National Council of Teachers of English (NCTE) Journal Features Ponder

We are honored to be featured in Dr. Kristen Hawley Turner and Dr. Troy Hicks’ piece in the NCTE’s November 2015 English Journal (Vol. 105, No. 2) entitled “Connected Reading Is the Heart of Research.”

The piece introduces a framework for teaching adolescents to read, but it also advocates a more thoughtful approach to the application of technology in the English classroom through an introspective exploration of what it means to be digitally literate.


“We must advocate for digital literacy, not just technology, in a way that reconceptualizes our discipline.”

A sample Ponder lesson walk-through in the piece explains how the collaborative annotation experience supports shared inquiry during research for an essay, and how Ponder’s interface speeds the teacher’s review of student activity, allowing them to better lead class discussion.

Both are former K-12 English teachers: Dr. Turner is a professor of Curriculum and Teaching at Fordham University, where she directs the Fordham Digital Literacies Collaborative, and Dr. Hicks is a professor of English Language & Literature at Central Michigan University, where he directs CMU’s Chippewa River Writing Project.

Also check out their 2015 book Connected Reading: Teaching Adolescent Readers in a Digital World.


New modes of interaction for Flip videos Part 3

This past semester I’ve been experimenting with new modes of interaction for video. I’ve written about 2 previous test sessions here and here.

Annotating video is hard. Video is sound and imagery moving through time. It’s an immersive and, some might say, brain-short-circuiting medium. Watching 3 videos simultaneously may be the norm today. However, if you’re truly engaged in watching video content, in particular content that is chock-full of new and complex ideas, it’s hard to do much else.

Watching video content makes our brains go bonkers.

“’Every possible visual area is just going nuts,’ she adds. What does this mean? It shows that the human brain is anything but inactive when it’s watching television. Instead, a multitude of different cortexes and lobes are lighting up and working with each other…”

“She” is Joy Hirsch, Director of fMRI Research at Columbia University, being cited by the National Cable & Telecommunications Association, which interprets her results to mean watching TV is good for our brains, like Sudoku. I’m not sure about that, but it’s reasonable to conclude that consuming video content occupies quite a lot of our brain.

Of course no one is saying reading doesn’t engage the brain. However, one key difference between text and video makes all the difference when it comes to annotation: with text, we control the pace of reading, slowing down and speeding up constantly as we scale difficult passages or breeze through easy ones.

Video runs away from us on its own schedule whether or not we can keep up. Sure, we can pause and play, fast-forward and slow down, but our ability to regulate video playback is clunky compared to the dexterity with which we control the pace of reading.

In fact, the way researchers describe brain activity while watching TV sounds a lot like trying to keep up with a speeding train: every area of the brain lights up just to keep up with the action.

So what does that mean for those of us building video annotation tools?

Video annotation has all the same cognitive challenges of text annotation, but it comes with additional physiological hurdles as well.

STEM v. The Humanities

I’ve been working off the assumption that responding to STEM material is fundamentally different from responding to material in the humanities. For STEM subjects, the range of relevant responses is much more limited. It essentially amounts to different flavors of “I’m confused” and “I’m not confused.”

I’m confused because:

  • E.g. I need to see more examples to understand this.
  • Syntax! I don’t know the meaning of this word.
  • How? I need this broken down step-by-step.
  • Why? I want to know why this is so.
  • Scale. I need a point of comparison to understand the significance of this.

I get this because:

  • Apt! Thank you. This is a great example.
  • Got it! This was a really clear explanation.

Humor is a commonly wielded weapon in the arsenal of good teaching, so being able to chuckle in response to the material is relevant as well.

But as is often the case when trying to define heuristics, it’s more complicated than simply STEM versus not-STEM.

Perhaps a more helpful demarcation of territory would be to speak in terms of the manner and tone of the content (text or video) and more or less ignore subject matter altogether. In other words: The way in which I respond to material depends on how the material is talking to me.

For example, the manner and tone with which the speaker addresses the viewer varies dramatically depending on whether the video is a:

  • “How-to” Tutorial
  • Expository Lecture
  • Editorializing Opinion
  • Edu-tainment

The tutorial giver is explaining how to get from A to Z by following the intervening steps B through Y. First you do this, then you do that.

The lecturer is part explanatory, part provocative. This is how you do this, but here’s some food for thought to get you thinking about why that’s so.

The editorializing opinion-giver is trying to persuade you of a particular viewpoint.

Edu-tainment is, well, exactly that: delivering interesting information in an entertaining format.

And of course, the boundaries between these categories are sometimes blurry. For example, is this Richard Feynman lecture an Expository Lecture or an Editorializing Opinion?

I would argue it falls somewhere in the middle. He’s offering a world view, not just statements of fact. You might say that the best lecturers are always operating in this gray area between fact and opinion.

The Test Session

So in our 3rd test session, unlike the previous 2, I chose 3 very different types of video content to test.

Documentary on the Stanford Prison Experiment (Category: Edu-tainment)

A 10-minute segment of the Biden v. Ryan 2012 Vice Presidential Debate re: Medicare starting at ~32:00. (Category: Editorializing Opinion)

Dan Shiffman’s Introduction to Inheritance from Nature of Code (Category: Expository Lecture)

You can try annotating these videos on Ponder yourself:

  1. Dan Shiffman’s Introduction to Inheritance from Nature of Code.
  2. Biden v. Ryan Vice-Presidential Debate.
  3. The Stanford Prison Experiment documentary.

The Set-up

There were 5 test subjects in the same room, watching 3 different videos embedded in the Ponder video annotation interface, each on their own laptop with headphones. That means that, unlike in previous test sessions, each person was able to control the video on their own.

Each video was ~10 minutes long. The prompt was to watch and annotate with the intention of summarizing the salient points of the video.

2 students watched Dan Shiffman’s Nature of Code (NOC) video. 2 students watched the documentary on the Stanford Prison Experiment. And 1 student watched the debate.

The Results

The Stanford Prison Experiment documentary had the most annotations (15 per user, versus 12 for NOC and 5 for the debate) and the most varied use of annotation tags (22 distinct tags, versus 5 for NOC and 4 for the debate).

Unsurprisingly, the prison documentary provoked a lot of emotional reactions (50% of the responses were emotional – 12 different kinds – compared to 0 emotional reactions to the debate).

Again unsurprisingly, the most common response to the NOC lecture was “{ chuckle },” accounting for 12 of the 25 responses. There was only 1 point of confusion, a matter of unfamiliarity with syntax: “What is extends?”

This matches a pattern I noted in the previous sessions: in many STEM subjects, everything makes perfect sense during the “lecture.” The problem is that oftentimes, as soon as you try to do it on your own, confusion sets in.

I don’t think there’s any way around this problem other than to bake “problem sets” into video lectures and allow the points of confusion to bubble up through “active trying” rather than “passive listening.”

Screenshots: Intro to Inheritance – NOC; Biden v. Ryan Vice-Presidential Debate; Stanford Prison Experiment

Less is More?

There are 2 annotation modes in Ponder. The first displays a small set of annotation tags (9) in a Hollywood Squares arrangement. The second displays a much larger set of tags. Again, the documentary watchers were the only ones to dive into the second, more nuanced set of tags.

Less v. More

However, neither student watching the documentary made use of the text elaboration field, where you can write a response in addition to applying a tag (they didn’t see it until the end), whereas the Nature of Code and Biden-Ryan debate watchers did. This made me wonder how having the elaboration field as an option changes the rate and character of the responses.

Everyone reported pausing the video more than they normally would in order to annotate. Much of the pausing and starting simply had to do with the clunkiness of anchoring your annotation to the right moment on the timeline.

It’s all in the prompt.

As with any assignment, designing an effective prompt is half the battle.

When I tested without software, the prompt I used was: Raise your hand if something’s confusing. Raise your hand if something is especially clear.

This time, the prompt was: Annotate to summarize.

In retrospect, summarization is a lot harder than simply noting when you’re confused versus when you’re interested.

Summarization is a forest-for-the-trees kind of exercise. You can’t really know moment-to-moment as you watch a video what the salient points are going to be. You need to consume the whole thing, reflect on it, perhaps re-watch parts or all of it and construct a coherent narrative out of what you took in.

By contrast, noting what’s confusing and what’s interesting is decision-making you can do “in real-time” as you watch.

When I asked people for a summary of their video, no one was prepared to give one (in spite of the exercise), and I understand why.

However, one of the subjects who watched the Stanford Prison Experiment documentary was able to pinpoint the exact sentence uttered by one of the interviewees that he felt summed up the whole thing.

Is Social Always Better?

All 3 tests I’ve conducted were done as a group, sitting together in a classroom. At Ponder, we’ve been discussing the idea of working with schools to set up structured flip study periods. It would be interesting to study the effect of socialization on flip: do students pay closer attention to the material in a study-hall environment versus studying alone at home?

The version of Ponder video we used for the test session shows other users’ activity on the same video in real-time. As you watch and annotate, you see other people’s annotations popping up on the timeline.

For the 2 people watching the Stanford documentary, that sense of watching with someone else was fun and engaging. They both reported being spurred on to explore the annotation tags when they saw the other person using a new one. (e.g. “Appreciates perspicacity? Where’s that one?”)

By contrast, for the 2 people trying to digest Shiffman’s lecture, the real-time feedback was distracting.

I assigned an annotation exercise to another test subject to be done on her own time. The set-up was less social in two senses: she was not sitting in a room with other people watching and annotating videos, and she was not annotating the video with anyone else via the software.

I gave the same prompt. Interestingly, from the way she described it, she approached the task much like a personal note-taking exercise. She also watched Shiffman’s Nature of Code video. For her, assigning predefined annotation tags got in the way of note-taking.

Interaction Learnings

  • The big challenge with video (and audio) is that it is a black box content-wise. As a result, the mechanism that works so well for text (simply tagging an excerpt of text with a predefined response tag) does less well on video, where the artifact (an annotation tag attached to a timecode) is not as compelling. So I increased emphasis on the elaboration field, keeping it open at all times to encourage people to write more. (A sketch of what such an annotation record might look like follows this list.)
  • On the other hand, the forest-for-the-trees view offered by the video timeline is, I think, more interesting to look at than the underline heatmap visualization for text, so I’ll be looking for ways to build on that.

    Timeline

  • There was unanimous desire to be able to drag the timecode tick marks after having already submitted a response. We implemented that right away.
  • There was also universal desire to be able to attach a response to a span of time (as opposed to a single moment in time). The interaction for this is tricky, so we’ve punted on this feature for now.
  • One user requested an interaction feature we had implemented but removed after light testing, because we weren’t sure whether it would prove more confusing than convenient: automatically pausing the video whenever your mouse gestures indicated you were about to create an annotation, then restarting it as soon as you finished submitting. I’m still not sure what to do about this, but it supports the idea that the difficulty of pacing video consumption makes annotating and responding to it more onerous than doing the same with text.
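To make the learnings above concrete, here is a minimal sketch of what a single video annotation might look like as a data record. This is an illustration in Java with hypothetical names, not Ponder’s actual data model:

    // Hypothetical annotation record; all names are illustrative only.
    public class VideoAnnotation {
        String userId;          // who annotated
        String videoId;         // which video
        double startSeconds;    // the (now draggable) timecode tick mark
        Double endSeconds;      // optional span of time; null means a single moment
        String tag;             // predefined response tag, e.g. "{ chuckle }"
        String elaboration;     // optional free-text elaboration of the tag

        public VideoAnnotation(String userId, String videoId, double startSeconds,
                               Double endSeconds, String tag, String elaboration) {
            this.userId = userId;
            this.videoId = videoId;
            this.startSeconds = startSeconds;
            this.endSeconds = endSeconds;
            this.tag = tag;
            this.elaboration = elaboration;
        }
    }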

Takeaways

  1. Annotating video is hard to do, so any interaction affordance that makes it easier helps.
  2. Dense material (e.g. Shiffman’s lecture) is more challenging to annotate. Primary sources (e.g. the debate) are also challenging to annotate. The more carefully produced and pre-digested the material (e.g. the documentary), the easier it is to annotate.
  3. With video, we should be encouraging more writing (text elaborations of response tags) to give people more of a view into the content.
  4. Real-time interaction with other users is not always desirable. Users should be given a way to turn it on/off for different situations.
  5. There may be a benefit to setting up “study halls” (virtual or physical) for consuming flip content, but this is mere intuition right now and needs to be tested further.

Last but not least, thank you to everyone at ITP who participated in these informal test sessions this semester, and to Shawn Van Every and Dan Shiffman for their interest and support.

Implementing Flip: Why higher-order literacy is not just about text

Within the field of “instructional” EdTech, Ponder is often described as a “literacy” tool, which, while accurate, encompasses a much broader spread of pedagogical challenges. We usually describe our focus as “higher-order” literacy – the ability to extract meaning and think critically about information sources.

A couple months ago we began our pilots of Ponder Video, bringing our patent-pending experience to the medium more often associated with the flipped classroom. From our experience with text over the past two and a half years, we knew this would be an iterative process, and as expected we are learning a lot from the pilots and the experimentation – see part 1 and part 2 of our interface studies.

During this process, some people have asked if Ponder Video is, in startup terminology, a “pivot”: a change of strategy and focus for our organization. The question: do we still consider Ponder a literacy tool?

After a bit of reflection, the answer is a resounding YES! And the process of reflection helped us gain a deeper appreciation for what “literacy” actually means. This is not a change of strategy; it is an expansion of Ponder to match the true breadth of literacy.

Literacy > Text

The term literacy is most often thought of as the ability to decode words and sentences. That is, of course, the first level of literacy, but there is a shift of focus under way in many of the new pedagogical and assessment debates, from the Common Core to the SAT: a shift away from memorizing facts and vocabulary and toward students developing a higher-order literacy. Still, higher-order literacy is a vague concept, and at Ponder we are always searching for ways of articulating our vision more clearly.

One line I like, from a now-deprecated page of ProLiteracy.org with no byline, does a really nice job of concisely capturing the significance of a broad definition of literacy: “literacy is necessary for an individual to understand information that is out of context, whether written or verbal.”

The definition is so simple, you might miss its significance. So let me repeat it:

“literacy is necessary for an individual to understand information that is out of context, whether written or verbal”

I like it because “understand information” goes beyond mere sentence decoding, and “out of context” unassumingly captures the purpose of literacy – to communicate beyond our immediate surroundings. The “or verbal” I would interpret broadly to include the many forms that information comes in today – audio, video and graphical representations.

The 21st century, at least so far and for the foreseeable future, is the interconnected century, the communication century, the manipulated statistics century, the Photoshopped century, perhaps the misinformation/disinformation century, and I would posit that if there is one “21st century skill” that we can all agree on, it is literacy, in the broad sense:

Understanding information out of context.

A text or video is inherently out of context, so a student at home is not only one step removed from the creator of the content, but also removed from the classroom. So a question immediately jumps to mind:

Are your students ready to learn out of context?

The answer to this question varies dramatically, and is not easily delineated by grade level; defining that readiness to provide an appropriate scaffold requires care, and is something we have worked to understand empirically through student activity in Ponder.

The National Center for Education Statistics, part of the US Department of Education, has put a lot of effort into defining and measuring this skill, and has twice performed a survey it calls the National Assessment of Adult Literacy (NAAL), providing a useful jumping-off point for thinking about your students.

This is not like one of those surveys you read about. It is a uniquely thorough survey that consists of a background questionnaire and screening process, followed by an interview.

The NAAL is made up 100% of open-ended, short-answer responses – not multiple choice – and focuses on the participants’ ability to apply what they have read to accomplish “everyday literacy goals.” You read something, then answer a question that depends on something you have to have extracted from the reading.

As you might imagine, this is not a quick process.

Administering the NAAL takes 89 minutes per person, and in 2003 it was administered to 18,000 adults sampled to represent the US population. That’s 18,000 × 89 minutes ≈ 26,700 person-hours, or about three years of round-the-clock interviewing.

This thoroughness is important given that they are trying to measure a broad definition of literacy.

The NAAL breaks literacy into three literacy skill types:

  • Prose
  • Document
  • Quantitative

You can read the details on their site, but given that it turns out American adults have roughly comparable prose and document literacy scores, I would lump them together under a general heading of “reading.” Examples of quantitative literacy tasks are reading math word problems, balancing a checkbook or calculating interest on a loan.

They delineate four literacy levels:

  • Below basic
  • Basic
  • Intermediate
  • Proficient

Again, they go into a lot of detail mapping scores on to these names, but I think what’s most useful are the “key abilities” that distinguish each level in their definitions.

My interest in higher-order literacy immediately draws my eye to the key distinction between “Basic” and “Intermediate.” An intermediate skill level means the individual is capable of:

“reading and understanding moderately dense, less commonplace prose texts as well as summarizing, making simple inferences, determining cause and effect, and recognizing the author’s purpose”
NAAL Overview of Literacy Levels

That list of skills captures the starting point of what we think of as higher-order literacy. (If you’re curious, the highest level of literacy, modestly labeled “Proficient,” seems to mostly be distinguished by the ability to do this sort of analysis across multiple documents.)

For me, the NAAL provides a useful framework for breaking down the literacy problems that instructional techniques (and technologies) are trying to address.

Ponder supports teachers who are trying to move their students from a level of basic literacy to being able to make inferences, determine cause and effect, and recognize the author’s purpose.

…but our goal is to go an important step beyond even the NAAL’s definition of literacy.

Because really, what is the point of making inferences and identifying cause and effect if ultimately you are unable to probe with your own questions and evaluate with your own conclusions?

In the end, the end-game of literacy is the ability to “think for yourself.”

Flip is a great way to practice literacy. But you need literate students to flip.

The flipped classroom model is typically used to have students dig into material and prepare for class discussion, and it obviously presumes a basic level of student literacy. But passively consuming a video or skimming a text isn’t enough to drive discussion back in class. As we all know from our own student days, technically meeting the requirements of having “done the reading” does not comprehension make.

Flipping, more so than traditional classroom lectures, requires students to be able to dig beneath the surface of the content, question its credibility, ask clarifying questions and make their own inferences.

Such are the makings of a classic Chicken and Egg conundrum. Flipping requires students to have the skills they are still trying to learn and master through…flipping.

I don’t think anyone has claimed to have answered this question yet, and neither have we, but the first step is realizing what you don’t know, and we do claim to have done that! We will continue to share the learnings from our video research as we iterate on Ponder Video, and welcome more ideas and discussion from teachers everywhere.

Population by Prose Literacy Level (Courtesy NAAL)

Curious about the numbers? The NAAL has been run twice – once in 1993 and again in 2003 – and there wasn’t a big change in the scores in those ten years, except a slight increase in quantitative literacy. However, we have a pretty serious higher-order literacy problem: between 34% and 43% of adult Americans lack the higher-order literacy skills to be classified as “intermediate” or above by the NAAL.


New modes of interaction for Flip videos Pt. 2

This semester in addition to teaching, I am a SIR (Something-in-Residence) at ITP, NYU-Tisch’s art/design and technology program.

My mission for the next 3 months is to experiment with new modes of interaction for video: both for flip videos and for live events. (Two very different fish!)

User Study No. 2

I recently wrote about my first User Study with students from Dan Shiffman’s Nature of Code class. A couple of weeks ago, 3 students from Shawn Van Every’s class, Always On, Always Connected, volunteered to watch 4 videos. In this class, students design and build new applications that leverage the built-in sensor and media capabilities of mobile devices.

The Setup

Again, there were no computers involved. We screened the videos movie-style in a classroom. Instead of asking for 2 modes of interaction (“Yes, that was super clear!” versus “Help!”), I asked for a single mode of feedback: “Raise your hand when you’ve come across something you want to return to.”

The 4 videos introduce students to developing for the Android OS. It’s important to note that simply by virtue of working in the Android development environment (as opposed to Processing), the material covered in these videos is much more difficult than what was covered in the Nature of Code videos the previous week. However, the students’ level of programming experience is about the same.

What happened…

Video 1: Introduction to logging

Zero hands. But from the looks on people’s faces, it was clear it was not because everyone was ready to move on. A few things to note:

  1. For screencasts, watching a video movie-style is extra not-helpful, as it is difficult to read the text on screen.
  2. The video begins promptly after a 20s intro. With instructional videos, the emphasis is always on brevity (the shorter the better!). In this case, however, I wonder whether 20s is really enough to let you settle down and settle into the business of trying to wrap your head around new and alien concepts. I’m sure #1 made it harder to focus as well.
  3. Reading code you didn’t write is always challenging and I think even more so on video where we’re much more tuned into action (what’s changing) and much less likely to notice or parse what’s static.
  4. Unlike before, when I asked after-the-fact, “Where in the video do you want to go back to?” the students were unable to respond. Instead the unanimous response was, “Let’s watch the entire video again.” This is where collecting passive data about the number of times any given second of a video is watched by the same person would be helpful.
  5. In general, questions had to do with backstory. The individual steps of how to log messages to the console were clear (a minimal logging sketch appears below); what was missed was the what and the why. First, what am I looking at? And second, why would I ever want to log messages to the console? I say “missed” and not “missing” because the answers to those questions were in the videos. But for whatever reason, they were not fully absorbed.
  6. Last but not least, I have to imagine that this watching as a group and raising your hands business feels forced if not downright embarrassing.

Hopefully, a software interface for doing the same thing will elicit more free-flowing responses from students as it will provide them with a way to ask questions “in private.”
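As an aside, for readers who haven’t seen the videos: the mechanics that came through clearly amount to only a few lines. Here is a minimal sketch, assuming a standard Android activity (my illustration, not the video’s exact code):

    import android.app.Activity;
    import android.os.Bundle;
    import android.util.Log;

    public class MainActivity extends Activity {
        // The tag lets you filter this app's messages in the logcat console.
        private static final String TAG = "MainActivity";

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            // The "why" the videos answer but students missed: logging exposes
            // runtime state you otherwise can't see, e.g. confirming that a
            // lifecycle method actually ran.
            Log.d(TAG, "onCreate called");
        }
    }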

Videos 2-4

Everyone was more settled in after watching the logging video, and each subsequent video built on the momentum accumulated in the previous one. I started to get some hand-raising with Video 2, and when we returned to those points in the video, people were very specific about what was confusing them.

“Why did you do that there?” or its converse, “Why didn’t you do that there?” was a common type of question, as was “What is the significance of that syntax?”

Another way to look at it is: There were never any issues with the “How.” The steps were clearly communicated and video is a great medium for explaining how. The questions almost always had to do with “Why?”, which makes me wonder if this is the particular challenge of video as a medium for instruction.

Does learning “Why?” require a conversation initiated by you asking your particular formulation of “Why?”

Video 2: Toasts

Video 3: Lifecycle of an Android App Part 1

  • @2:00: What is the significance of @+id? Why aren’t you using the strings file for this?
  • @6:50: Why did you change arg0 to clickedView?

Other syntax questions included:

  • What’s the difference between protected and public functions?
  • What’s extends and implements?
  • What’s super()?
  • What’s @Override?

All of these syntax questions pointed to a much larger / deeper rabbit hole having to do with object-oriented programming and encapsulation, a quick way to derail any getting started tutorial.
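For reference, all of those keywords fit in a few lines of Java. A minimal sketch of my own (not code from the videos) showing each one in context:

    // A base class with one protected field and public methods.
    class Shape {
        protected String name;   // protected: visible to subclasses, not to everyone
        public Shape(String name) { this.name = name; }
        public String describe() { return "a " + name; }
    }

    // An interface is a contract a class can implement.
    interface Drawable {
        void draw();
    }

    // "extends" inherits from a class; "implements" fulfills an interface.
    class Circle extends Shape implements Drawable {
        public Circle() {
            super("circle");     // super() invokes Shape's constructor
        }

        @Override                // @Override marks an intentional override
        public String describe() { return super.describe() + " (round)"; }

        @Override
        public void draw() { System.out.println("drawing " + describe()); }
    }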

In general though, I don’t think these syntax questions prevented anyone from understanding the point of the videos, which were about creating pop-up messages (Video 2) and the lifecycle of Android apps (Videos 3 and 4): when do they pause versus stop, re-initialize versus preserve state?

Video 4: Lifecycle of an Android App Part 2

In Video 4, there were a lot of nodding heads during a section that shows the difference between pausing/resuming and killing/creating an Android app. The demonstration starts around @2:20 and runs for a full minute until @3:23 – epic in online video terms. It’s followed by a walk-through of what’s happening in the code and then reinforced again through a perusal of the Android documentation @3:50, where an otherwise impenetrable event-flow diagram is now much more understandable.
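To make the pause/resume versus kill/create distinction concrete, here is a minimal sketch of the lifecycle callbacks that demonstration exercises, assuming a plain Activity (again, my illustration, not the video’s code):

    import android.app.Activity;
    import android.os.Bundle;
    import android.util.Log;

    public class LifecycleDemo extends Activity {
        private static final String TAG = "LifecycleDemo";

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            Log.d(TAG, "onCreate: created (or re-created) from scratch");
        }

        @Override
        protected void onPause() {
            super.onPause();
            Log.d(TAG, "onPause: backgrounded; state is preserved");
        }

        @Override
        protected void onResume() {
            super.onResume();
            Log.d(TAG, "onResume: back in the foreground");
        }

        @Override
        protected void onDestroy() {
            super.onDestroy();
            Log.d(TAG, "onDestroy: killed; state is gone unless explicitly saved");
        }
    }

Watching logcat while backgrounding and killing the app reproduces, in text form, exactly what the demonstration shows on screen.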

It’s also important to note that both the demo which starts at @2:20 and the documentation overview @3:50 are preceded by “downtime” of trying, failing and debugging starting the Android emulator and navigating in the browser to find the flow diagram.

In general there’s a lot of showing in these videos: each concept that’s explained is also demonstrated. This segment, however, was particularly lengthy, and in accordance with the law of “It-Takes-A-While-To-Understand-Even-What-We’re-Talking-About” (#2 above), my completely unverified interpretation of events is that the length of the demonstration (far from being boring) helped everyone sink their teeth into the material.

What’s Next and Learnings for Design

As we head into testing actual software interfaces, this 2nd session gave me more concrete takeaways for workflow.

  1. You lay down “bookmarks” on the timeline as you watch to mark points you would like to return to.
  2. You lay down “bookmarks” on the timeline after you’ve watched the video to signal to the instructor what you would like to review in class the next time you meet.
  3. You can expand upon “bookmarks” with an actual question.
  4. You select from a set of question tags that hopefully help you formulate your question. (More to come on this.)

While it’s important to break the videos down into digestible morsels, it’s also important to make it easy to watch a few in a row, as the first one is always going to be the most painful one to settle into. There are ways to encourage serial watching through user interface design (e.g. a playlist module, a next-video button, previewing and auto-playing the next video). But perhaps something can be done on the content side as well, by ending each video on a so-called “cliffhanger.”

New modes of interaction for Flip videos Pt. 1

This semester in addition to teaching, I am a SIR (Something-in-Residence, no joke) at ITP, NYU-Tisch’s art/design and technology program.

My mission for the next 3 months is to experiment with new modes of interaction for video: both for the flipped classroom and live events. (Two very different fish!)

Magnitude! Direction!

2 weeks ago, I conducted the first in a series of informal user studies with 3 students from Dan Shiffman’s Nature of Code class, which introduces techniques for modeling natural systems in the Java-based programming environment Processing.

User Study No. 1

Last year, Dan flipped his class, creating a series of ~10-minute videos ranging from how to use random() to move objects around a screen to modeling the movement of ant colonies.

The Setup

We viewed the first 2 videos from Chapter 1, which introduce the concept of a vector and the PVector class in Processing.

There were no computers involved. We watched them together on a big screen, just like you would a movie, except I sat facing the students.

The only “interaction” I asked of them was to:

  1. Raise their right hand when Dan said something extra clear.
  2. Raise both hands to indicate “Help!”

What happened…

On the whole, the “interaction” was a non-event. There were a few tentative raisings of the right hand and zero raisings of both hands.

However, after each video, when I asked at what points in the video things were not perfectly clear, there was no hesitation in the replies.

Points of confusion fell mostly into 1 of 2 categories:

  1. I simply need more background information.
  2. Maybe if I rewatch that part it will help.

Although I will say that, in my subjective opinion, number 2 was said in a tentative and theoretical fashion, which is interesting because “the ability to re-watch” is a much-vaunted benefit of flip.

It must be said though, that we’re early in the semester and the concepts being introduced in these videos are simple relative to what’s to come.

Learnings for design…

Some early thoughts I had coming out of this first session were:

  • Unlike reading, video is too overwhelming for you to be able to reflect on how you’re reacting to it while you’re trying to take it in. (Just compare reading a novel to watching a movie.)
  • That being said, so long as the video is short enough, people have a pretty good idea of where they lost their way, even after the fact.
  • Still, there needs to be a pay-off for bothering to register where those points are on the video. For example, “If I take the time to mark the points in the video where I needed more information, the instructor will review them in class.”
  • It’s still unclear to me how a “social” element might change both expectations and behavior. If you could see other people registering their points of confusion and asking questions and you could simply pile on and say “I have this question too” (many discussion forums have this feature), the whole dynamic could change.
  • Hence, the popularity of inserting slides with multiple choice questions every few minutes in MOOC videos. The slides serve as an explicit break to give the viewer space to reflect on what they’ve just watched.

I wonder though if they need to be questions. Genuinely thought-provoking questions in multiple choice form are hard to come by.

In a flip class where there is such a thing as in-class face time, a simple checklist of concepts might suffice.

Here were the concepts that were covered. Cross off the ones you feel good about. Star the ones you’d like to spend more time on. I think it’s key that you are asked to do both actions, meaning there shouldn’t be a default that allows you to proceed without explicitly crossing off or starring a concept.

Concepts for Video 1.1:

  • A vector is an arrow. ★ @2:00: Why is it an arrow and not a line?
  • A vector has magnitude (length).
  • A vector has direction (theta). ★ @3:30: What’s theta?
  • A vector is a way to store a pair of numbers: x and y.
  • A vector is the hypotenuse of a right triangle (Pythagorean Theorem).

Concepts for Video 1.2:

  • PVector is Processing’s vector class.
  • How to construct a new PVector.
  • Replacing floats x,y with a PVector for location.
  • Replacing float xspeed, yspeed with a PVector for velocity.
  • Using the add() method to add 2 PVectors.
  • Adding the velocity vector to the location vector.
  • Using the x and y attributes of a PVector to check edges.
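
For context, these concepts boil down to just a few lines of Processing. A minimal sketch of the bouncing-ball example the videos build on (my paraphrase, not Dan’s exact code):

    // Location and velocity each store a pair of numbers (x, y) as PVectors.
    PVector location;
    PVector velocity;

    void setup() {
      size(640, 360);
      location = new PVector(100, 100);   // construct a new PVector
      velocity = new PVector(2.5, -2);    // magnitude and direction in one object
    }

    void draw() {
      background(255);
      location.add(velocity);             // add the velocity vector to the location vector

      // Use the x and y attributes of a PVector to check edges.
      if (location.x > width || location.x < 0) velocity.x *= -1;
      if (location.y > height || location.y < 0) velocity.y *= -1;

      ellipse(location.x, location.y, 16, 16);
    }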

The tricky thing with Video 1.2 is that the only point of confusion was @10:27, when Dan says, almost as an aside, that you add the velocity vector to the location vector, which you can think of as a vector from the origin. I don’t think this kind of problem area would surface in a list of concepts to cross off or star.

Next Session

In session 2, I will be conducting another user study with students from Shawn Van Every’s class “Always On, Always Connected” (an exploration of the technologies and designs that keep us online 24/7).

My plan is to try asking the students to raise their hand simply to register: I want somebody to review what happened at this point in the video.

We’ll see how it goes!

Lesson Plan: 8th Graders on the Boehner Budget Deal

This is a guest post by Cason Given, 8th grade social studies teacher at The Trinity School in New York City.

If you’ve read any of my previous posts, you know I am a fan of Ponder. Ponder is a tool that allows students to read current event texts (student-selected or teacher-selected), and then tag those texts with sentiments and themes. It’s a great tool for increasing student engagement, for giving students a stake in their own learning process, for monitoring student comprehension, and, generally, for developing student awareness about what’s happening in the world around them.

This week, I had the pleasure of meeting Ponder’s founder, Alex Selkirk, at the 1776 Challenge Cup, a start-up competition in Chelsea. Personable, incredibly smart and humble, this guy is awesome! I got to hear all about Ponder’s origins, and how the tool really started out as an interactive reading tool for friends to have insight into what their buddies were reading. I’m so glad this idea was then applied to an educational setting; it has been a tremendous success in my own classroom.

Budget Deal Article View

In the wake of meeting Alex, I have had all sorts of ideas for the application and expansion of this tool. What if sentiments were differentiated for readers of differing strengths? What if articles could “talk” to one another (interactive debates from left-leaning and right-leaning sources reporting on the same issue; fact-checking; presenting the “devil’s advocate” position; supporting a point or elaborating on one with another source)? What if some of my elementary-school-teaching friends picked this tool up and began applying it in classrooms for younger students? What if tutors began using this product to monitor tutees’ comprehension in the days between sessions?

I see endless possibilities for this product. I am excited about opening communication with the team. It’s so encouraging and energizing to be around people who want to promote learning tools for learning’s sake. After all, isn’t that why all educators (those of us who teach formally and those of us who teach in less traditional spaces) get into teaching?

As for my own use of Ponder, it was applied in a new way this week in my classroom. On Friday, 8s opened up their laptops to read three pre-selected articles regarding the recent House budget deal. Two articles were overviews (taken from CNN and Time, respectively), and one was a piece regarding how Senator Cruz is already pushing back on the deal because it funds Obamacare (taken from Politico). This lesson approach is a new application of Ponder for me; I have previously used it for independent reading and class discussion. Since our periods are relatively short (40m), we only touched on our discussion by class’s end. We will continue analyzing the articles on Monday after 8s finish taking their Intro to China quizzes (FUN!).

Some screencaps from student discussion [Note: I am limited in what I can show because of student usernames and protecting their identities; this is only a very small sample of the discussion going back and forth!]:

House Shutdown Response

Budget Compromise Myth

Cruz Problem: Budget funds Obamacare

2 responses from the same student.

Multiple Choice v. Free Response

Is multiple choice really so bad?

For the humanities, we answer unequivocally: “sorta.” If any of the truly important questions about the human condition had unambiguously correct answers, human history would have been a long boring tale of comity and well-being.

As a result, any attempt to assess subjects in the humanities through multiple choice cannot and does not broach the questions dearest to humanists. Instead it must skirt and skulk around, looking for secondary signs of an engaged and thinking mind. We ask our students to identify metaphors and supporting arguments without also asking: How evocative is the metaphor? How convincing is the argument?

Why? Because multiple choice questions, as we all know, must by definition contain at least one unambiguously correct answer. And we also know: the more interesting the question, the less clear the answer.

Still, is there a way to do multiple choice that’s open-ended and allows for asking questions with no right answer? We think we have a (nuanced, textured) answer to that question. We have embraced multiple choice, sorta …by turning it inside-out and upside-down.

In Ponder multiple choice, the questions are predefined and the possible answers are infinite. The questions are not questions for the students (e.g. Can you identify the topic sentence of this paragraph?); they’re questions for the author of the text, opportunities to “talk back” to the reading, and it’s up to the student to figure out where and how to ask them.

After all, we learn by asking questions, not answering them.

Our readers have a more important task than simply finding examples of Metaphor versus Simile, Hubris versus Pride, Compromise versus Conciliation (themes predefined by teachers). They are also asked to identify examples that perplex them, intrigue them, shock, disgust, inspire, agitate, make them wonder if someone might be a touch hysterical or someone else is oversimplifying something that deserves more serious consideration (reactions predefined by us). They don’t simply observe and identify, they analyze and evaluate, both the author’s ideas and their own reactions to those ideas. In a word, they think.

Still, assessment is simple. Though Ponder is not a Scantron machine that can tell you automatically who was right and who was wrong, it’s concise, data-rich, and designed to call attention to good work. It is easy for both teachers and classmates to evaluate each response.

  1. Has it substance?
  2. Is the reaction apt?
  3. Are the themes apt?

If not, let’s talk about it!

Same student, 2 different responses. Both are invitations if not provocations to ask Why?

And Ponder is only getting smarter. While we can’t pass absolute judgement on student responses, what we can do automatically is build a nuanced profile of each student over time.

  • Who’s taking the time to read in-depth articles?
  • Who’s expanding beyond their comfort zone to read about new subject areas?
  • When confronted with something confusing, who’s able to identify exactly how they’re confused?
  • Who’s reacting emotionally? Who’s able to evaluate the soundness of logic?
  • Who’s figured out how to get their classmates interested in what they’re interested in?
  • Who’s good at starting conversations?

What we’re interested in is the ability to paint a portrait of readers that reflects their level of curiosity, comprehension, self-awareness and awareness of others.

We can’t produce a number with the finality of a Scantron machine.

But really, what does a 67 versus an 83 mean when we’re talking about the Bill of Rights or Leaves of Grass?

You’ll be able to explain that number better with the insights Ponder affords you into your students’ thinking. Multiple choice is really not so bad if you don’t let it kill the open-ended nature of intellectual inquiry. We like it, sorta.

Identifying Teaching Moments at the NYC DOE Shark Tank

On Friday we were invited to present at NYC DOE’s Teacher Shark Tank, one event in a series where three edtech startups get 30 minutes each to present and answer questions from DOE teachers.

The Teacher Shark Tank is hosted by iZone, NYC DOE’s Office of Innovation, which supports schools in personalizing learning to accelerate college and career readiness among our students.

Ponder is running in many schools across the country this semester, but in our hometown of New York, we are in one NYC DOE school (Stuyvesant H.S.), as well as one NYC charter (WHEELS) and one NYC private school (Trinity School). This was our first opportunity to formally present to DOE educators at a DOE-organized event, so we were excited to be there!

Other presenters included Quill, which has figured out a way to blend learning grammar into an interactive reading experience, and Fast Fig, a word processor for math that enables teachers to cleanly and easily create equations and graphs online – a long-sought-after solution with many applications!

We had a late start, but this didn’t deter the great group of interested and engaged teachers, who are clearly the vanguard of technology users at their schools (City as School, High School of Telecommunications Arts and Technology, and P.S. 64 the Robert Simon School).

We wanted to impress this audience in particular. Fortunately, over the past two years of watching classes use Ponder (first graduate business classes, then undergraduate philosophy classes, then 12th grade English and 9th grade global studies classes, and now 2nd grade ELA classes!!), we’ve evolved how we present and explain Ponder.

In our presentation Friday, Ben and I focused on one key concept: the speed at which a teacher can review student micro-reading responses. How fast can a teacher review Ponder micro-reading responses, you ask? Real fast. Fast enough that teachers can encourage their students to make as many responses as they’d like, knowing they will have time to grade them all and provide meaningful feedback. In fact, our conceit (which has proven true in higher ed and is starting to prove itself in K12 as well) is that not only will the instructor be able to review everyone’s responses, they’ll be able to do so *before* class starts, and actually use their students’ responses as the basis for in-class discussion.

To prove my point, Ben and I put up four different Ponder micro-reading responses from a single 8th grade class in the Chicago Public School system and asked the teachers in the room how quickly they could assess each one.

Number 1: A solid response.

No. 1 Coherent and appropriate.

The excerpt that the student chose is coherent, though it’s not making a particularly controversial or insightful point. The sentiment s/he applied (“I empathize.”) is appropriate, though not particularly nuanced, nor does it exhibit deeper insight or independent thinking.

Number 2: Exemplary!

No. 2 Real insight and independent thinking!

The excerpt is coherent and interesting, making a surprising, counter-intuitive argument. The sentiment applied is spot on, demonstrating the student clearly understands that the author is making a claim and now needs to substantiate it with supporting evidence.

Number 3: Red Flag!

No. 3 Incoherent and inappropriate.

The selection itself is incoherent, and the sentiment is clearly inappropriate. Either the student is completely lost and doesn’t understand the point of the assignment, or s/he is simply not trying at all.

Number 4: A Teaching Moment.

No. 4 What is there to agree about?

This is where things start to get interesting. This is an opportunity for what would pedagogically be referred to as a “teaching moment,” an invitation for further discussion in class. First of all, the selection itself is interesting: the author describes an interaction that is clearly intended to provoke some sort of emotional reaction from the reader. However, the student chose to agree with it – not the reaction the author probably intended! So, why did you concur? What are you agreeing with? What is the idea that you thought emerged from this quote? Or, perhaps, you’ve identified a moment in which the student wasn’t reading very carefully at all, which is valuable in and of itself.

We maintain a long list of ideas on how to better support this process of evaluating reading responses. It changes week to week as we watch our K12 classes settle into how to use Ponder while discovering new uses for it as well.

Still, I think we’ve reached an important milestone in delivering on the promise of providing a way for students to “practice critical reading” while giving teachers a way to respond to and build on that practice.

And, let it not go without saying, we are lucky to have such thoughtful students and teachers using Ponder that we can so easily find a mountain of interesting responses!


Can critical reading also be fun?

Last week I wrote about the challenges and pitfalls of allowing students to write elaborations in their micro-reading responses. But what have we to say on the subject of engaging students in reading?

It’s great how specific the Common Core gets about what it really means to read critically:

Determine what the text says explicitly…make logical inferences…cite specific textual evidence…support conclusions…Determine central ideas…analyze development…summarize key supporting…Analyze how and why…Determine technical, connotative, and figurative… analyze specific word choices…Analyze structure…Assess point of view…Integrate and evaluate…visually and quantitatively…Delineate and evaluate arguments and specific claims…validity of reasoning…relevance and sufficiency…comprehend complex literary and informational texts independently and proficiently.

But boy does it sound like work! And (dare I say it) not of the fun variety.

At the end of the day, “getting good” at critical reading comes down to practice! practice! practice! Quantities of practice beyond completing homework assignments; quantities of practice that are unreasonable for a teacher to grade; quantities of practice that can only result from embracing a habit of reading for the sake of enjoyment, not grades.

I love reading! (and xkcd)

Good readers love to read, “poor” readers don’t. Reading makes you a better reader.

Like so many other intractable problems, figuring out how to improve the lot of “poor readers” is another one of these depressing chicken-and-egg situations.

So how about getting specific about how to make reading fun? …in 4th grade, in high school, in grad school, in life?

We certainly don’t pretend to have any sure-fire answers. What we do have are a set of assumptions and biases we’ve collected over the past couple of years of piloting Ponder in the classroom:

  1. Students are more engaged if they have a say in what they’re engaging with.
  2. An excellent course reader is the backbone of any syllabus. Still, primary and journalistic texts are always going to be more interesting than textbooks.
  3. Self-direction can’t be “taught”, but like a muscle, it can be developed through assignments that leave room for self-determination.
  4. You can teach techniques for critical reading, but “getting really good at it” can only be achieved through cultivating a regular habit of close reading.
  5. The only way to “debug” reading problems is to gain a view into what students are thinking while they read. Writing is better than multiple choice for such assessment. However, what’s needed is a solution that allows students to “practice” critical reading several dozen times a day in a way teachers can actually stay on top of and respond to.
  6. Critical reading is most fun when it’s a way to challenge how you think and feel about the world and least fun when it’s an end in and of itself.

I’ll go one step further and say, identifying thesis statements and supporting evidence are means, not ends.

They are technical way stations on the path to discovering the surprising, counter-intuitive insights that are the real reasons why “good readers” love to read.

Such sweeping statements are very fine in the abstract, so next up: Getting specific about assignments and lesson plans to make reading fun!

Ponder now supports writing…with conditions.

Google “essay-grading rubric” and you will find endless resources describing a well-written essay.

  • clearly articulated
  • bristling with energy
  • identifiable structure
  • provides a clear overview
  • logical order
  • stays on topic
  • clear, well-focused
  • well supported
  • detailed and accurate
  • introduce main topic
  • author’s purpose is clear
  • vivid words and phrases
  • accurate
  • natural
  • lean, economical
  • easy to understand
  • not overly repetitive
  • and did I mention it should be clear…and focused?

How dashing! (From legendaryauctions.com)

I can picture it now, personified in the form of a ’60s-era superhero: RUBRICMAN!

Clear! 
Purposeful! 
Accurate! 
Focused! 
Vivid and Energetic!

We have all had the experience of reading a piece of writing that met the letter of such a rubric, yet failed in every way to actually say anything or be about anything. (Just check out anypolitician.com’s platform page.)

Are we in danger of simply teaching students to master the art of political-speak? Grammatical, vigorous phrases peppered with topical keywords, bolstered by statistics from reputable sources, topped off with a garnish of “Even sos, Howevers, and Neverthelesses” to intimate that a great struggle between thesis and antithesis is taking place to produce a hard-wrought synthesis: “In conclusion…”

Is teaching-to-the-rubric any better than teaching-to-the-test?

A human can certainly discern when a student “uses complicated terms without fully considering their implications.” But can a machine? (I’ll leave that for another blog post.)

At Ponder, we see the value of such rubrics, yet we remain wary of their pitfalls, particularly where software is involved.

As a result, we’ve recently added written responses to Ponder…with caveats.

Click the pencil to elaborate!

Readers must still first select from Ponder’s predefined menu of sentiments. Any written response is presented as an opportunity to elaborate on the sentiment you chose.

In that sense, written responses in Ponder are more “elaboration” than “free response.”

Why? Surely “writing your mind” is a more challenging and meaningful exercise than selecting from a menu of predefined choices.

Yes, absolutely! …provided there is someone (read: the teacher) to provide feedback (read: call b.s.).

Without timely feedback, written response can easily become a bureaucratic exercise: Writing for the sake of having written.

Therefore, given the time-pressures of teaching, we’re sticking with the idea that written responses in Ponder should be used sparingly and strategically, guided by the process of sentiment-selection.

How? Ponder sentiments are precise, leaving little room for the kind of vagueness that allows you to say something without saying anything. Confused? What do you need to un-confuse yourself? Having a strong reaction? Is it intellectual or emotional? Positive or negative? Sarcastic or earnest?

Our sentiments are by no means perfect or complete. They are a work-in-progress and always will be. If you complained that they are a rather blunt instrument for expressing the finer points of human studies, we would agree with you, while maintaining that they are still better than providing no structure at all. Nevertheless, Ponder sentiments’ bluntness is their principal strength.

They mean what they say and by pushing you the reader to choose one, they also prompt you to seriously consider what you mean by what you say.

Once you’ve committed to a sentiment, you are free to elaborate, equivocate, and caveat to your heart’s content. But first, like a disenchanted voter in the ballot box, you must cast your lot, clearly and purposefully.