F. Samples of e-learning activities

E-learning activity examples:

__** Week 1: Moving to e-learning **__ There are a number of issues in converting and undertaking courses online compared to the traditional teacher-centred methodology of face-to-face learning and teaching. The attitudes and experiences of both teachers and students with online teaching delivery need to be taken into account. The attitudes and experiences of stakeholders who are the recipients of the graduates from these online courses also need to be highlighted. There are limitations to online learning, even at postgraduate level. First, there is the assumption that the student is already familiar with the technology. Even within this course there is an array of options: being presented with Moodle when Blackboard has been the preferred and comfortable format, the use of e-portfolio technologies such as Mahara, and the use of wiki spaces, to name a few examples. Todd (2009) concluded that teachers possess varying degrees of technology literacy and that this caused concern for both students and teachers. Economic considerations also arise, with facilities balancing their capacity to service diverse student populations. The resourcing required by teachers to create and upload materials, in comparison to traditional teaching methods, is often overlooked. Todd's article (2009) depicts the teachers' perspective in terms of technology comfort and knowledge, as well as time allocations. Todd (2009) further concluded that online delivery should supplement teaching rather than be the sole source of information.

**Reference:**

Todd, N 2009, 'Converting an undergraduate nursing course to mostly online: one experience', //Distance Learning//, vol. 6, no. 3, pp. 15-22, online EBSCO//host//.

Additional information comparing online and traditional face-to-face teaching can be found at: []

__** Week 2 Activity **__ by Katrina Maree Lane Krebs - Friday, 9 September 2011, 01:35 PM

I agree that the marking rubric and the outcomes of the assessment item need to be clearly defined and communicated, both for the benefit of marking and for student satisfaction with their learning journey. In many universities (based on my experience), there is a great deal of emphasis in a written essay on the detail in terms of structure (worth 20%), which covers presentation, line spacing, abstract, table of contents and so on. This is, so to speak, a checklist of the required components of the paper. On average (again from subjective experience), a further 20% of marks pertain to the referencing section, which includes consistency with the required style, the number of resources used and so on: again, another checklist component.

Students with skills in formatting and referencing have therefore already been presented with the ability to obtain 40% of the marks for a paper; they then require only a further 10% (out of the potential 60% for the approach-and-argument component, which is essentially the critical thinking component of the assessment) to pass the assessment item. It appears, therefore, that the student can (theoretically) pass the assessment item without demonstrating a sound depth of understanding of the content.
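To make the arithmetic explicit, the sketch below works through the weighting described above. It is illustrative only: the 50% pass mark and the sample component scores are assumptions, not figures from any particular course.

```python
# Illustrative only: the 20/20/60 weighting comes from the discussion
# above; the 50% pass mark and the sample scores are assumptions.

PASS_MARK = 50  # assumed pass threshold (percent)

# Component weights (percent of the total mark).
weights = {"structure": 20, "referencing": 20, "argument": 60}

# A hypothetical student who is strong on formatting conventions but
# weak on critical thinking: full marks on the two checklist
# components, only 10 of the 60 argument marks.
scores = {"structure": 1.0, "referencing": 1.0, "argument": 10 / 60}

total = sum(weights[c] * scores[c] for c in weights)
print(total)               # 50.0  (20 + 20 + 10)
print(total >= PASS_MARK)  # True: a pass with little content knowledge shown
```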

I believe that within the section on structure there needs to be attention to the clarity and logical connection of the information presented on the topic at hand, and to the comprehensiveness of the view or the consideration of alternative views. This needs to be a core component of the 20% allocation of marks for this section.

By communicating this type of assessment structure, the student can then be assessed more accurately on their content knowledge of the subject matter, rather than on their ability to apply a "formatted structure" to an essay assignment. The marking rubric is analysed further by Harrell (2005) in terms of the breakdown of assessment criteria within its various sections.

Morozov (2011) found that students had more satisfaction with their learning journey when presented with a very detailed marking rubric that directed a major share of marks towards the critical thinking component.

**References:**

Harrell, M 2005, 'Grading According to a Rubric', //Teaching Philosophy//, vol. 28, no. 1, pp. 3-15, Academic Search Complete, EBSCO//host//, viewed 8 September 2011.

Morozov, A 2011, 'Student attitudes toward the assessment criteria in writing-intensive college courses', //Assessing Writing//, vol. 16, no. 1, pp. 6-31, Academic Search Complete, EBSCO//host//, viewed 8 September 2011.

//Comment from Peter Donnan// - Friday, 9 September 2011, 02:46 PM: // Skills in 'formatted structure' certainly count for a lot in university courses - referencing, clarity of argument etc. To some extent students develop these skills as they progress through a course, and even tools such as EndNote, MS Word and RefWorks assist with some of the formal elements of style. In the eReserve readings, there is an assessment grid by Chris Rust that contains a lot of examples of rubrics that can be adapted for different purposes and in different disciplines - worth a look. I liked this posting - crisp, focused and supported by evidence. Peter //

Using assessment rubrics and fine-tuning the grid from generic examples: []
Re: Week 2 Activity

// Hullo Katrina, // Additional information linked to rubrics can be located at: []

__** Designing rubrics for higher-order thinking (specifically within nursing) **__ []

__** Week 3: Best practice ideas around assessment **__ Best practice: (i) offer options through student election of either a group or an individual assessment activity, to foster motivation; (ii) learning does not only have to be the most visible task - latent learning is valuable; (iii) the learning journey should produce a high level of satisfaction.

Within a course that I have taught for a number of semesters, students are presented with the option of either a group or an individual submission. The course involves analysis of behavioural aspects of health practices. Groups have a maximum of five members and are formed by the students themselves rather than allocated by the coordinator.

The feedback from students' experiences is varied. Generally, the students working in groups indicated that the assignment took longer to complete, due to the to-and-fro of emails, online meetings, forum posts and Skype discussions. However, the majority of students from within groups indicated they learnt more from this ongoing discussion of the learning content than they perceived would have been possible had they embarked on the assignment alone. Students within groups on average received higher marks than those students who elected to work on their own. This ties in with the group assessment and evaluation concepts.

The latent learning that occurred related to the behavioural dynamics of members within a group environment. Students also indicated that their use of technology improved, as they were able to try out new methods with no pressure or assessment task attached. Group participants reported a greater understanding of the course content material (based on self-evaluation) and scored higher on the learning journey satisfaction indicators.

__** Week 4: Constructive alignment **__ by Katrina Maree Lane Krebs - Tuesday, 13 September 2011, 02:25 PM

Re: Constructive alignment

Having been the original author of a course some years ago, I handed it on to others, who contributed their own flavour to the course over the semesters. The course was returned to me at the end of three years for review and alignment against the learning outcomes and graduate attributes (GA). The GA as they now stood were a new area of alignment for the course to be evaluated against. My hypothesis was that additional components would need to be added to meet all of the GA. Interestingly, my assumption proved incorrect: within the collaborative framework that had resulted over the various semesters, I found that ALL GA were met when a thorough analysis of the course content, delivery and assessment was undertaken.

__** Week 4: Re: e-assessment **__ My current course has five online quizzes in total, each worth 10% of the overall marks. In preparing the quizzes I reflected on the purpose of the assessment. My issue, based on previous experience in coordinating the course, was that students were not utilising the set text for the course as fully as they should. The quest then became how to have students become familiar with the textbook. This was a foundation issue, as the key concepts discussed in the text were later required to be linked in an essay-style assignment in which students selected their own case study and analysed behaviours.

Each quiz was structured in a multiple-choice format with five possible responses. Quizzes were open for four weeks and students were able to make two attempts at each quiz. The first attempt redirected the student to the appropriate section of the text so they could review the material before completing the second attempt. There was also a penalty factor for taking a second attempt, and Moodle automatically recorded the highest mark. The goal was to have students become familiar with the text, to highlight key terms and concepts, and to have the student perform basic critical analysis from a brief scenario. Students indicated they appreciated the two attempts, as this enhanced their understanding. They also appreciated that the quizzes were not timed and that they were able to use their textbook throughout. It should be noted that the quiz questions were not simply 'replicate the facts' or 'find the word' selections; rather, the student had to understand the term and apply it to a brief scenario. The immediacy of the response was also a favourable factor.
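As a rough illustration of the grading arrangement described above, the sketch below models a two-attempt quiz with a retry penalty. It is not Moodle's actual grading code: the 10% per-attempt penalty rate and the linear penalty formula are assumptions made for the example; only the two-attempt, highest-mark-recorded setup comes from the description above.

```python
# Sketch of a two-attempt quiz grade with a retry penalty.
# Not Moodle's implementation: the 10% penalty rate and the
# linear penalty formula are assumptions for illustration.

def recorded_grade(attempt_marks, penalty=0.10, max_mark=10.0):
    """Return the best penalised mark across attempts.

    attempt_marks: raw marks for each attempt, in order.
    penalty: fraction of max_mark deducted per prior attempt.
    """
    penalised = [
        max(mark - penalty * max_mark * i, 0.0)
        for i, mark in enumerate(attempt_marks)
    ]
    # The gradebook keeps the highest penalised mark.
    return max(penalised)

# Example: 6/10 on the first attempt, 9/10 on the second.
# The second attempt is penalised by 1 mark (10% of 10),
# so 8.0 is recorded rather than 9.0 or 6.0.
print(recorded_grade([6.0, 9.0]))  # 8.0
```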

//Comment from Susan:// // Thanks Katrina, it is good to see lecturers taking so much time to ensure their MC quizzes are so well aligned with course outcomes and learning objectives and that they are meaningful for students and do enhance their learning. I fear there are instances of lecturers using MC quizzes because they are easy to mark (Moodle does it for you), but I wonder about the depth of learning that occurs in these instances where students are not required to engage with the information, to apply it in different scenarios/contexts, to demonstrate that they have a deeper understanding of it. Most people can find and present information, but it is what we do with it/how we use it that is important. // // Susan //

__** Week 5: Perspectives on marking criteria **__ Adapted from a previous forum post: the marking rubric and the outcomes of the assessment item need to be clearly defined and communicated, and this is applicable to all courses and content, both for the benefit of marking and for student satisfaction with their learning journey. In many universities (based on my experience), there is a great deal of emphasis in a written essay on the detail in terms of structure (worth 20%), which covers presentation, line spacing, abstract, table of contents and so on. This is, so to speak, a checklist of the required components of the paper. On average (again from subjective experience), a further 20% of marks pertain to the referencing section, which includes consistency with the required style, the number of resources used and so on: again, another checklist component.

Students with skills in formatting and referencing have therefore already been presented with the ability to obtain 40% of the marks for a paper; they then require only a further 10% (out of the potential 60% for the approach-and-argument component, which is essentially the critical thinking component of the assessment) to pass the assessment item. It appears, therefore, that the student can (theoretically) pass the assessment item without demonstrating a sound depth of understanding of the content; skill in structure, regardless of knowledge of content, carries across any discipline. I believe that within the section on structure there needs to be attention to the clarity and logical connection of the information presented on the topic at hand, and to the comprehensiveness of the view or the consideration of alternative views. This needs to be a core component of the 20% allocation of marks for this section. It is applicable to many disciplines, as it allows for the expression of contrasting approaches and opinions; I see it working equally well across a variety of courses, from engineering to nursing.

By communicating this type of assessment structure, the student can then be assessed more accurately on their content knowledge of the subject matter, rather than on their ability to apply a "formatted structure" to an essay assignment. The marking grid is analysed further by Harrell (2005) in terms of the breakdown of assessment criteria within its various sections. Morozov (2011) found that students had more satisfaction with their learning journey when presented with a very detailed marking rubric that directed a major share of marks towards the critical thinking component.

**References:**

Harrell, M 2005, 'Grading According to a Rubric', //Teaching Philosophy//, vol. 28, no. 1, pp. 3-15, Academic Search Complete, EBSCO//host//, viewed 8 September 2011.

Morozov, A 2011, 'Student attitudes toward the assessment criteria in writing-intensive college courses', //Assessing Writing//, vol. 16, no. 1, pp. 6-31, Academic Search Complete, EBSCO//host//, viewed 8 September 2011.

__** Week 6 **__

The Otago (2005) article provides a clear and concise table detailing the evaluation cycle. Of particular interest are the components of critical reflection, which branches into changes to the teaching process and also changes to evaluation. In my experience, it must be remembered that a single course unit also sits within a semester program; with this in mind, reflection must also take into account the timing of the evaluation. Feedback received from students, in both qualitative and quantitative terms, as well as the lecturer's interpretation of the results, indicates that students will not present their best work in instances where assignments for four subjects are due at the same time. This situation is particularly pronounced in the summer component of the teaching term. In determining what we must assess within our own units, consideration should be given to the overall timing. This ties in with the notion of changes to evaluation under the heading of critical reflection: 'what do the data tell me and what should I do about it' (Otago 2005, p. 10). I would be interested to know whether the qualitative aspects of feedback, for example students expressing dissatisfaction with the timing of assessment, are given as much emphasis as the quantitative aspect of the assignment grade, with changes implemented accordingly.

__** Week 7 **__

'Evaluation is often viewed as a test of effectiveness - of materials, teaching methods or whatnot - but this is the least important aspect of it' (Bruner, cited in Ramsden 2003). My teaching philosophy includes providing an enjoyable experience for the student. Having had what I see as the luxury of teaching non-graded courses, there appears to be greater student satisfaction, accompanied by greater engagement with the course materials, in comparison to courses which are graded. Graded courses often see the student focusing only on the assessment items, and in my experience this is most often the case with online courses. As a teaching scholar, I undertook postgraduate qualifications in developing the use of technology within the virtual classroom. Whilst undertaking this qualification, I gained a great deal of experience in trialling various technological teaching tools; the downside of the course was that the assessment actually became an obstacle to exploration of the vast array of tools I encountered. As an academic involved in the evaluation of recent courses, I found it interesting that the informal data indicated that some assessment items actually detracted from the students' learning journey. At the end of the day, however, there needs to be an element of accountability and measurement of the extent of learning that has occurred.

**Reference:**

Ramsden, P 2003, //Learning to Teach in Higher Education//, Routledge, London.

__** Week 8 **__

Tools that are in place to detect plagiarism, such as Turnitin (often used within Moodle), may not pick up plagiarism if the student submits their assessment in the earlier phases (e.g. if student B, who has copied student A's paper, submits before student A, then it would appear that student A is in fact the one plagiarising). Turnitin is also of limited use where a template has been provided, as this automatically inflates the similarity rates. Where limited resources are stipulated, e.g. students are required to create an annotated bibliography on a specialised topic area where limited literature is available, this will also lead to higher than recommended similarity scores. An interesting article by Thompsett and Ahluwalia (2010) indicates that students find the Turnitin score somewhat confusing and not of great value. This contrasts with Batane (2010), who demonstrated that once students were aware that assignments would be screened for plagiarism, there was a noted decrease in the practice.

**References:**

Batane, T 2010, 'Turning to Turnitin to Fight Plagiarism among University Students', //Educational Technology & Society//, vol. 13, no. 2, pp. 1-12, ERIC, EBSCO//host//, viewed 14 November 2011.

Thompsett, A & Ahluwalia, J 2010, 'Students Turned Off by Turnitin? Perception of Plagiarism and Collusion by Undergraduate Bioscience Students', //Bioscience Education//, vol. 16, ERIC, EBSCO//host//, viewed 14 November 2011.

__** Week 9 **__

In terms of peer evaluation, previously emphasised in this unit, it is necessary to know who the target audience or stakeholders will be. Baume (2011) highlights the need to select peers who are trusted to deliver unbiased views and constructive feedback. Now, to play the devil's advocate: if a teaching rating were contributed to by a peer assessment, what type of reviewer would be selected? Is this an evaluation due to mandated requirements, or one genuinely contributing to the improvement of teaching quality, resources or whatever else? The knowledge base of the reviewers, in terms of the emphasis they place on evaluation, is a further consideration. Within a recent course development, a peer external to the faculty was asked to be involved in the construction of a new course and online teaching site as a peer reviewer. This staff member was astounded by the amount of effort directed to developing (planning) the evaluation, and even suggested that this aspect should attract a workload allocation of its own. This ties in with Elson and Berkeley (1991) and Reeves and Hedberg (2003), who identify a budget (or, in this case, a workload allocation) as a factor in the development of evaluation planning.

**References:**

Elson, D & Berkeley, C 1991, 'A Look at Planning and Evaluation Linkages across the Nation', National Centre for Research in Vocational Education, online EBSCO//host//.

Reeves, T & Hedberg, J 2003, //Interactive Learning Systems Evaluation//, Educational Technology Publications, Englewood Cliffs.

__** Week 10 **__

I agree with Susan's post. In some instances the feedback from students can be reduced to little more than a popularity contest between lecturers. An enthusiastic, outgoing and engaging lecturer may receive more positive feedback than their more stoic peer; however, on analysis of the students' grades, there may be little variation between each lecturer's groups. Another limitation of feedback is that the course content is often associated with the lecturer: for example, a physiology or anatomy unit may require many hours of study, as it is essentially a factual-recall unit, whereas a unit that analyses forensic behaviours may be more appealing to students because of the nature of the topic. Using student evaluation comments as a contributor to teaching ratings needs to be attempted with a great deal of caution, as this may lead to unfair rankings (as expressed by Au 2011).

**Reference:**

Au, W 2011, 'Neither Fair nor Accurate: Research-Based Reasons Why High-Stakes Tests Should Not Be Used to Evaluate Teachers', //Rethinking Schools//, vol. 25, no. 2, pp. 34-38.

__** Week 11 **__

In relation to plagiarism, it was recently brought to my attention that an essay from a previous offering of a course has been posted as an 'essay for sale'. Having tracked down the original essay (by a past student), the plan, devised in consultation with a peer, is to submit the 'view for fee' sample section through Moodle and then see how many essays submitted for the current assessment show similar content or reference lists. The essay topic is generic, in that the specific components can be applied to a topic of the student's own interest. Apart from plagiarism, another concern is students outsourcing the writing of their essays. I remain astonished that while some students can barely write a coherent email requesting an extension, within weeks they are submitting assignments of such high calibre that one would question the authorship. Authorship is also questionable when an exceptionally high degree of insight and depth of knowledge is displayed in a foundation course assignment. Again, these points, while raising questions, are not necessarily covered by specific policy.

__** Week 12 **__

The moderation process detailed in this post is a standard, reasoned-action approach that is adopted across our faculty. The development of the moderation process can itself be linked to the Plan-Act-Reflect framework, and this is evident in the modifications that occur in the practice of moderation.

I do agree with the nominated reading for this week that the moderation process can add delays in providing feedback to students. Moderation becomes a highlighted issue and a necessary workload factor in large courses: in order to achieve a transparent and equitable grade allocation for all students, a process of strict moderation must be implemented. Apart from large class sizes, the need for more frequent moderation (or increased numbers of papers selected for moderation) is heightened where marking staff are new to the role. Additional perspectives on moderation can be found in the following article:

Hunter, K & Docherty, P 2011, 'Reducing Variation in the Assessment of Student Writing', //Assessment & Evaluation in Higher Education//, vol. 36, no. 1, pp. 109-124, ERIC, EBSCO//host//, viewed 15 November 2011.