Friday, April 19, 2013

Program Evaluation and Improvement (Yuanjie and Kanita)




Program evaluation video





Program evaluation is an essential organizational practice in social programs. However, it is not practiced consistently across program areas, nor is it sufficiently well integrated into the day-to-day management of most programs.

Effective program evaluation is a systematic way to improve and account for public health and social service actions, using procedures that are useful, feasible, ethical, and accurate.

The author provides a framework composed of six steps that should be taken into account in any evaluation.

Step 1: Engage stakeholders

Step 2: Describe the program

Step 3: Focus the evaluation design

Step 4: Gather credible evidence

Step 5: Justify conclusions

Step 6: Ensure use and share lessons learned

Stakeholders must be engaged in the inquiry to ensure that their perspectives are understood. When stakeholders are not engaged, an evaluation might not address important elements of a program’s objectives, operations, and outcomes.

Program descriptions convey the mission and objectives of the program being evaluated. Descriptions should be sufficiently detailed to ensure understanding of program goals and strategies.

The evaluation must be focused to assess the issues of greatest concern to stakeholders while using time and resources as efficiently as possible.

An evaluation should strive to collect information that conveys a well-rounded picture of the program, so that the information is seen as credible by the evaluation's primary users.

The evaluation conclusions are justified when they are linked to the evidence gathered and judged against agreed-upon values or standards set by the stakeholders.

Deliberate effort is needed to ensure that the evaluation processes and findings are used and disseminated appropriately.

 

One of the most significant benefits that a program evaluation communicates is the need to make improvements.

Examples of the types of improvements that an evaluation may reveal could include:

·         The elimination of program services and activities that do not achieve program outcomes

·         The addition of other services and activities that are better designed to achieve outcomes

·         The acquisition of more adequate resources to support program services and activities

·         Targeting a different group of participants to receive program services when too few of the specified participants are available

These and/or other types of program improvements that are revealed by the evaluation findings should be viewed and communicated as an opportunity to make the program better.


How have you used the CDC framework in your line of work to evaluate something?

What other frameworks do you find to be more effective than this particular framework?

What are some possible ways in which you can engage stakeholders?

What important factors can you take away from the YouTube video posted above?




References:

Posavac EJ, Carey RG. Program evaluation: methods and case studies. Englewood Cliffs, NJ: Prentice-Hall, 1980.

Eddy DM. Performance measurement: problems and solutions. Health Aff 1998;17(4):7-25.

Thursday, April 4, 2013

Why Should We Beta Test?

By: Katie Horst

What is beta testing?
Beta testing is when a product is tested by real customers in their real environments before it is released. During a beta test, the company and the customers form a partnership to help the product reach its full potential.

Why should we beta test?
The overall purpose of conducting a beta test is to create the best possible product. Through a beta test, the company receives feedback and gains more knowledge about its product before release, so improvements can be made before the product is available to the public. Beta tests can help develop a product with improved customer satisfaction, better reviews, increased sales, fewer problems and lower support costs, a more positive brand image, and so on. A beta test is essential to make sure you are building a product that consumers actually want. Beta tests also give customers an opportunity to get involved and help improve a product they are interested in.

I found it interesting that companies such as Sony offer programs that let the public participate in beta testing for PlayStation. Here are the terms and conditions a person must follow: http://us.playstation.com/support/useragreements/ps_public_beta_test_program.html. The program gives people early access to certain things like games. Those people are then required to provide feedback on things such as the chat rooms, connectivity, and usability of the product.

Doing a beta test:
Click here for a great article if you want to know more about implementing a beta test. The article looks at why a beta test should be done, and it also provides general guidelines, the planning process, the evaluation process, and other helpful tips about doing a beta test.

Questions:
1. Have you ever participated in a beta test, or have you ever implemented one?
2. How did you go about conducting a beta test for your project?
3. Do you think doing a beta test will improve your product?
4. From your personal experiences, why do you think we should beta test?


Saturday, March 30, 2013

Why Do We Evaluate Instruction?


Evaluation is an integral part of the instructional design process. The purpose of evaluation after instruction is to provide feedback for improvement. Forming and utilizing an effective evaluation approach allows for continuous improvement in training system quality. Evaluations help determine the value of the learning program, and assessment and validation tools provide the data used in the evaluative process. Overall, the purposes of the evaluation phase of ADDIE are as follows:
1. Feedback
2. Control
3. Research
4. Intervention
5. Power Games

Feedback concerns how well the learning materials and course content are linked to the objectives set forth at the beginning.

Control and research are the processes of connecting real-world outcomes with the material learned from the training program.

Intervention is when the results from the evaluative process influence change in the course program or its delivery.

Lastly, power games, which are not always prevalent, are an area of manipulation: a part of evaluation where the results are skewed to serve organizational politics.

The steps to evaluation include:

1. plan the evaluation
2. determine the evaluation design
3. develop the instruments
4. collect data
5. analyze the data
6. report the findings

While evaluation may seem like a single overarching principle, there are three types of evaluation: formative, summative, and confirmative.

Formative evaluation is continuous evaluation throughout the content. One example would be checkpoints throughout a training program that change the learner's path based on the responses provided to questions.
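The checkpoint branching described above can be sketched in a few lines; the function name and section labels here are illustrative assumptions, not part of any actual training program:

```python
def next_section(checkpoint_response, correct_answer):
    """Route the learner based on a formative checkpoint response."""
    if checkpoint_response == correct_answer:
        return "continue"  # learner moves ahead in the content
    return "review"        # learner is routed back to remedial material

# A correct response advances the learner; an incorrect one triggers review.
print(next_section("b", "b"))  # continue
print(next_section("a", "b"))  # review
```

The point of the sketch is simply that formative evaluation feeds back into the course while it is still running, rather than judging it at the end.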

Summative evaluation is a check of knowledge and understanding at the end of a training program. Think of the prefix sum- as signaling a total understanding. One example of summative evaluation would be a test at the end of a training course to check for understanding.

Lastly, confirmative evaluation takes place after a period of time has elapsed between the end of a training program and the evaluation. An example is retesting the learner six months after the training to check that learning occurred and was retained from the course work.

Watch this YouTube video for further understanding of Evaluation types.



Questions:
1. When was a time that you had to evaluate instruction and how did you do so?
2. How would you apply the 3 types of evaluation to your Instructional Design Project?
3. What is your feeling on Power Games? Do you think evaluation measurements should be changed based on organizational politics?
4. Evaluation has to do with Kirkpatrick's 4 Levels including Reaction, Learning, Behavior, and Results. Which levels match with the types of evaluation(formative, summative, confirmative)?









References


Bramley, P., & Newby, A. C. (1984). The evaluation of training part I: Clarifying the concept. Journal of European & Industrial Training, 8(6), 10-16.

Foxon, M. (1989). Evaluation of training and development programs: A review of the literature. Australian Journal of Educational Technology, 5(2), 89-104. http://www.ascilite.org.au/ajet/ajet5/foxon.html


http://www.learningsolutionsmag.com/articles/530/

Friday, March 22, 2013

Integrating New Technologies



This Rethinking Learning video from the MacArthur Foundation presents several thought-provoking comments on getting 21st-century students out of the 20th-century classroom, and some innovative examples of technology integration in education.

We are all familiar with the annual Horizon Report, "designed to identify and describe emerging technologies likely to have an impact on learning, teaching, and creative inquiry in higher education." In this report, key trends and technologies are profiled in terms of projected time-to-adoption, ranging from one to five years. So, as educational technologists, the question isn't what new technologies are coming, or even necessarily when they will become mainstream, but how we should integrate these new technologies to best serve teaching and learning.

The constant proliferation of new tech options affords learners a never-ending opportunity to interact with peers, instructors, and course content in novel and engaging ways. At the same time, the pace of technological innovation is so rapid that it is no longer possible for instructors to maintain real-time awareness of the latest options and to thoughtfully, creatively, and successfully apply them to educational contexts. And that's okay. In the words of instructional technologist Andrew Marcinek, "Educators should not pace education at the same pace at which technology moves. It is far too fast, and too sudden." Surveying the technology environment, understanding the educational context, and meeting learner and instructor needs at their current level of ability remain key when investigating a potential educational technology solution.

For reflection:
  1. In terms of new technology integration in the classroom, what is your experience, either as instructor or student? 
  2. Have you ever been required to use a "tech solution" that didn't match your needs? What happened? 
  3. As discussed in the MacArthur Foundation video, how can we more actively link informal, technology-rich learning outside of school with formal learning inside school?

References

Boss, S. (2011, Sep. 7). Technology Integration: What Experts Say. Retrieved March 22, 2013 at
http://www.edutopia.org/technology-integration-experts

Edutopia staff. (2007, Nov. 5). Why Do We Need Technology Integration? Retrieved March 22, 2013 at http://www.edutopia.org/technology-integration-guide-importance

Johnson, L., Adams, S., and Cummins, M. (2012). The NMC Horizon Report: 2012 Higher Education Edition. Austin, Texas: The New Media Consortium. Retrieved March 22, 2013 at
http://www.nmc.org/publications/horizon-report-2012-higher-ed-edition

MacArthur Foundation. (2010). Rethinking Learning: The 21st Century Learner. [web video]. Retrieved March 22, 2013 at http://www.nmc.org/publications/horizon-report-2012-higher-ed-edition

Monday, March 18, 2013

ADP Prototype

https://sites.google.com/site/adpinstructionaldesignproject/

Friday, March 15, 2013

VMRC Prototypes

Below is the link to our prototype for the Power Tools and Fire Safety training modules for the Virginia Mennonite Retirement Community:

http://firesafetyandpowertoolstraining.weebly.com/


Alex Lee Jones, Sarah Peachey, and Shana Ryman


LFCC Math Tutorials - Alicia, Tara, Katie

Here is our prototype for the LFCC Math Center project. Our project creates math tutorials for students to review before taking a math placement test. We are still working on our quizzes and supplemental material for our client. Please review and leave your comments!




Lord Fairfax Community College Academic Center for Excellence Math Tutoring Videos

Tara Cassidy, Alicia Grasso, Katie Horst  

March 14th, 2013


Module 1.1: Fractions: Defining the Basics

Objectives:
LFCC students taking Module 1.1 will be able to apply basic math concepts to fractions. After completing module 1.1, the learner will be able to
  • Define fractions and mixed numbers
  • Identify the parts of a fraction
  • Recognize when a fraction is in simplest form
  • Discriminate between a proper and improper fraction
  • Recognize the difference between dividing by zero and dividing into zero.

The proposed module is a streaming video tutorial that will prepare LFCC students to successfully apply college math concepts without a calculator when they take the math placement test. The core content in lesson 1.1 is basic college math vocabulary and concepts: fractions in simplest form, proper and improper fractions; mixed numbers; division by zero; division into zero.
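The lesson's distinction between dividing by zero and dividing into zero can be illustrated with a short snippet (Python is used here purely for illustration; the module itself presents this concept in video form):

```python
# Dividing INTO zero: zero split into any number of parts is still zero.
print(0 / 1)   # 0.0

# Dividing BY zero is undefined; Python raises an error rather than answer.
try:
    print(1 / 0)
except ZeroDivisionError:
    print("1 / 0 is undefined")
```

This mirrors the self-check question in module 1.1: 0/1 has a definite answer (zero), while 1/0 has none.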

The abstract math information will be paced for the novice learner and made concrete by presentation in video format: incorporating a student resource person (LFCC Math Consultant Mr. Jeremiah Dyke) into the video, offering a text transcription of Mr. Dyke's instruction, and providing a supplemental glossary of math terms to accompany the video tutorial.
*Screen shot from video that introduces LFCC Math Consultant, Mr. Jeremiah Dyke


After each math concept is introduced and verbally explained by Mr. Dyke while he visually represents the concepts on his whiteboard, the video will pause and present the learner with an embedded quiz question to check for comprehension. This approach will enable comfortable self-pacing and avoid overloading the novice by breaking the larger set of introductory concepts and vocabulary into smaller, more manageable pieces of information.
*Screenshot of the student view of Mr. Dyke’s visual representation of a concept



The learner will not proceed through the tutorial without correctly answering the assessment questions for each concept, in the order presented, for a given module. Learning will be measured by a comparison of pre-module and post-module scores on a basic fraction vocabulary quiz. Learners will not advance to the next module until the post-module score equals 100% correct.
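The mastery gating and pre/post comparison described above can be sketched as follows; the function names and data shapes are illustrative assumptions, not part of the actual module:

```python
def can_advance(post_module_answers, answer_key):
    """Learner advances only when every post-module answer is correct (100%)."""
    correct = sum(given == expected
                  for given, expected in zip(post_module_answers, answer_key))
    return correct == len(answer_key)

def learning_gain(pre_score, post_score):
    """Learning is measured as the change from pre- to post-module score."""
    return post_score - pre_score

# One wrong answer blocks advancement; a perfect score unlocks the next module.
print(can_advance(["b", "a"], ["b", "a"]))  # True
print(can_advance(["b", "c"], ["b", "a"]))  # False
print(learning_gain(40, 100))               # 60
```

The design choice here is mastery-based progression: the gate is a hard 100% threshold rather than a passing percentage, matching the module's requirement.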
Self Check:

0
--   =  ?
1

a.   1
b.   0
c.   0.1
Representation of an embedded quiz question for module 1.1
Self Check:

12
--
6

What is the numerator in this fraction?

a.   12
b.   6
c.   2
Representation of an embedded quiz question for module 1.1



The tutorial and self-check quiz questions will be self-paced in order to accommodate LFCC students with widely varying levels of math ability who wish to prepare effectively for the math placement test. Learners can engage with the tutorial either as a simple refresher or as a first exposure to these math concepts, accessing this online module at any time and moving through the tutorial at their own comfort level.

Materials Needed For this Module:
  • A computer or web-enabled mobile device
  • Reliable Internet connection
  • Headphones (if completing the module in a public space)