Basic Terminologies And Concepts



Formative and Summative Assessments

Formative assessment is used to measure whether learning objectives are being met during the learning process. For example, while explaining the physiology of the heart, a teacher wants to know whether the students are grasping the concept. To accomplish this, the teacher flashes two or three questions related to the concept on the screen and asks the students to respond. The feedback generated helps the teacher gauge the level of understanding so that appropriate next steps can be taken. If the feedback shows a poor grasp of the concept, the teacher can modify his or her instructional strategies to suit the learning needs of the class. Now, this is the beauty of formative assessment: it paves the way for students to achieve optimal learning because it allows instructional strategies to be modified along the way. Furthermore, it allows students to monitor their own learning progress and provides an opportunity to identify the areas of learning that need fine-tuning. In this manner, it makes them feel that they are in charge of their own learning journey and, in effect, empowers them.

On the other hand, summative assessment is used to measure whether the desired learning outcomes have been achieved at the end of the learning process. Its feedback summarizes both instructional effectiveness and the quality of students’ learning. For teachers, it identifies weak points in instructional strategies, which can then be addressed to improve learning outcomes in subsequent courses. For students, it identifies their level of proficiency or mastery at the end of the course. Since summative assessments measure students’ success at the end of the race, instructional strategies can no longer be modified, and the students’ opportunity to improve further also ends.

Reliability and Validity

Reliability and validity are two important concepts in assessment. They are not the same, but they are intimately interrelated. Is it imperative that an assessment be both reliable and valid for it to be useful and valuable? I believe so. Reliability gauges consistency while validity gauges accuracy, and both are significant characteristics of a good and effective assessment.

I will try to explain the relationship between these two important concepts using a simple analogy. A digital thermometer measures the body temperature of patients. Since it is electronic, it is powered by a battery. When health professionals are not mindful of this simple fact, it could mean the difference between life and death. When the battery of a digital thermometer is almost empty or “close to dying,” it will still consistently provide temperature readings, but are those readings still accurate? Of course not. A digital thermometer with an almost empty battery will consistently provide inaccurate readings, which is why health professionals must be prudent when using one. Now, this simple analogy shows the interrelatedness of reliability and validity. Furthermore, it reveals that not all reliable measurements are also valid measurements. On the other hand, according to the experts, a measurement that is valid is almost always reliable.1
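The thermometer analogy can also be put in numbers. Here is a small illustrative sketch (all readings are hypothetical, not real clinical data): the weak-battery thermometer produces tightly clustered readings (reliable) that sit far from the true temperature (not valid), while a well-calibrated one is both consistent and accurate.

```python
import statistics

# Hypothetical readings (in °C) for a patient whose true temperature is 38.0.
true_temp = 38.0

# Weak-battery thermometer: consistent (reliable) but biased (not valid).
weak_battery = [36.4, 36.5, 36.4, 36.5, 36.4]

# Well-calibrated thermometer: readings center on the true value and vary little.
calibrated = [37.9, 38.0, 38.1, 38.0, 37.9]

for name, readings in [("weak battery", weak_battery), ("calibrated", calibrated)]:
    mean = statistics.mean(readings)       # closeness of the mean to 38.0 ≈ validity
    spread = statistics.stdev(readings)    # small spread ≈ reliability
    bias = mean - true_temp
    print(f"{name}: mean={mean:.2f}, spread={spread:.2f}, bias={bias:+.2f}")
```

Both instruments have a tiny spread, so both are reliable; but only the calibrated one has a bias near zero, so only it is also valid.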

Norm- and Criterion-Referenced

Norm-referenced assessment compares a student’s performance relative to the norm. This is typically used in assessments where examinees are classified or ranked relative to their peers and usually assigned a percentile score. Back in high school, I remember getting a percentile score of 90 in science on the NSAT (National Scholastic Aptitude Test). Now, this tells me that I scored higher than 90% of the other examinees, but lower than the remaining 10%, in science alone.

In criterion-referenced assessment, the case is quite different. It compares a student’s performance against a specific criterion or set of criteria. The examinee’s score is compared against a cut score, which reveals whether the examinee has successfully demonstrated mastery or proficiency of a particular skill. A good example of a criterion-referenced test is a licensure examination. In 2007, I took the NCLEX-RN (National Council Licensure Examination for Registered Nurses). For months, I anxiously waited for the letter to arrive. When it finally came, I was so happy to learn that I passed, but confounded at the same time. I was expecting to see a numerical score of my performance on the examination, without understanding that the exam I took was not norm-referenced but criterion-referenced. The “pass” mark tells me that I have demonstrated mastery or proficiency of a particular skill, which means that I have met or perhaps surpassed the minimum requirements for someone to safely practice nursing in a particular U.S. state.
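The two scoring schemes can be contrasted with one raw score interpreted both ways. This is an illustrative sketch with made-up numbers; the class scores, the raw score, and the cut score are all hypothetical.

```python
# Hypothetical class scores on a 100-point test, and one examinee's raw score.
scores = [52, 61, 67, 70, 73, 75, 78, 81, 85, 92]
my_score = 81

# Norm-referenced: percentile rank = share of examinees who scored below me.
percentile = 100 * sum(s < my_score for s in scores) / len(scores)

# Criterion-referenced: the same raw score compared against a fixed cut score,
# regardless of how anyone else performed.
cut_score = 75
result = "pass" if my_score >= cut_score else "fail"

print(f"percentile rank: {percentile:.0f}")            # ranking relative to peers
print(f"criterion-referenced result: {result}")        # mastery decision only
```

Note that the norm-referenced number would change if the class performed differently, while the criterion-referenced result depends only on the cut score.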

Formal and Informal Assessments

Formal assessment, as the name implies, uses formal means of measuring learning. It utilizes standardized measuring tools to assess the degree of learning. Being standardized, it paints a general picture of a student’s overall achievement and compares it to that of his or her peers (norm-referenced) or against a particular set of criteria (criterion-referenced).

In informal assessment, there are no standardized measuring tools for assessing the degree of learning. Thus, as I was reading about it, I asked myself how one uses or even measures it. Based on my readings, I learned that a teacher can use observation techniques and evaluate the students’ performance at the end of an exercise. I remember that back in Goethe-Institut Manila, this is how we usually ended our class. Fifteen minutes before the time, our teacher would try to assess what we had learned for the day. We would form a circle, and the teacher would flash a card at the first student and ask him or her to formulate a question using the German word or phrase found on the card. The task of the next student was to formulate a response to the question asked. The same routine went on until the last student in the circle had participated. Throughout the exercise, our teacher would silently observe and jot down notes. At the end, she would discuss the correctness of our questions and responses and evaluate us based on relevance, coherence, and grammar.

Traditional and Alternative Assessments

Traditional assessment is the conventional way of measuring the level of learning. This type of assessment usually relies heavily on recall of facts; however, when tailored well, it can be a good measure of learning. It determines whether the student has a sound understanding of the concepts or not. Yet even if it is tailored well to elicit high-level reasoning, there is no way to tell how the student arrived at the right answer. The student could have guessed and gotten the right answer out of sheer luck. In effect, there is no direct evidence of learning.

On the other hand, alternative assessment is the non-traditional or unconventional way of measuring the level of learning. It is otherwise known as “authentic assessment” or “performance assessment.”2 It is authentic in nature because it relies heavily on the application of knowledge. The central goal is to design assessment tasks that have real-world significance, which makes learning more meaningful to the students. Alternative assessment places greater emphasis on bringing out direct evidence of learning. For example, when I ask my nursing students to demonstrate the proper way to inject a medication intramuscularly, I can see for myself whether it was properly executed or not. Using rubrics, I can measure whether a particular student has the competence to perform a task that is normally found in the real world. As to which form of assessment is better, experts say that teachers do not need to choose between traditional and alternative assessments. Despite the inherent weaknesses of traditional assessments, they can be combined with alternative assessments to best meet the students’ learning needs.3



Assessment is a powerful tool that attempts to measure both students’ learning and the effectiveness of instructional strategies. Assessments can measure the level of understanding while learning is taking place, or they can assess overall achievement at the end of the course or program. Assessments can also be utilized formally or informally. In the diagram, assessment tasks such as an oral presentation can be utilized formally or informally, depending on how you want the students to meet the learning objectives and achieve successful learning outcomes.

In 2004, Hanna and Dettmer proposed that we should strive to develop a range of assessment strategies that match all aspects of our instructional plans.4 I totally agree with this proposal because, in reality, students come in many shapes and forms. Some students are more comfortable with paper-and-pencil assessments, while others perform well in oral presentations or debates. The diagram is therefore a useful guide for teachers in employing a wide range of assessment strategies to produce an unbiased picture of what the students have actually learned, their learning styles, the areas where they need help the most, and even their potential.

The diagram shows four assessment tasks, A, B, C, and D, which are measured in terms of reliability and validity:

a. Assessment task A is reliable, as evidenced by the clustered dots in the lower right quadrant, but it is not valid because the cluster is off-center.

b. Assessment task B is both reliable and valid, as evidenced by the clustered dots in the center.

c. Assessment task C is neither reliable nor valid, as evidenced by the off-center and scattered dots.

d. Assessment task D is not reliable, as evidenced by the scattered dots, but it is valid because, on average, the dots center on the target.

Practical Application:

To illustrate an example of assessment task A on the diagram, I will use a simple restaurant customer-satisfaction survey. This type of questionnaire would normally ask about the quality of the food or the cleanliness of the place in order to improve the level of service. Just for the sake of making a point, let’s say it instead included questions about job satisfaction. Even though the questionnaire contains questions that are irrelevant to the survey, it will still yield consistent results, but it is not accurate because it does not measure what the survey originally intended to measure. The original purpose was to measure the level of restaurant customer satisfaction, not the level of job satisfaction.

1 Reliability and Validity. Retrieved from

2 Comparing Traditional and Performance-Based Assessment. Retrieved from

3 What is Authentic Assessment? Retrieved from

4 Hanna, G. S., & Dettmer, P. A. (2004). Assessment for Effective Teaching: Using Context-Adaptive Planning. Boston, MA: Pearson A&B.

This entry was posted in EDS 113: Principles And Methods Of Assessment.
