Ponder Video Interface

Critical Watching: Drilling Down into the Video Heatmap

Late last fall we released powerful heatmap filtering for the Ponder reading experience. We are now proud to announce a similar upgrade for Ponder Video.

Ponder Video Activity Bar

Teachers have been seeing hundreds upon hundreds of student responses on longer videos, and it became obvious that we needed to make it possible to separate out participation by group, student, sentiment and theme, just the way we do on text documents.

We love our tick marks and the quick overview they give you, so you’ll find them in their usual place along the yellow video timeline. As before, there’s one tick mark per response at a particular timestamp in the video, and the colors match the type of sentiment. Multiple responses at the same timestamp still stack on top of each other, so you can spot the points of focus by both tight clustering and the height of the bars.

Ponder Video Interface

Bookmarking stars appear below the timeline and your familiar zoom in/out UI helps with navigating longer videos. More on using the interface for critical watching on our support site.

Ponder Video Interface

But now here’s where it gets fancy. Notice anything different?

Now, above the timeline, underneath the video, you will find a set of filter drop-downs corresponding to the activity on the video.

Ponder Video Filter Menus

The first drop-down allows you to filter the responses by group, so that, for example, a teacher can look at one section or period of a course they are teaching at a time. The number in parentheses indicates the number of responses created by that group.

Ponder Video Group Filter Menu

Want to see just your responses, or those from a particular student? The second drop-down shows each responder, sorted by the number of responses they created, which is indicated in parentheses adjacent to each username.

Ponder Video User Filter

The third drop-down shows the mix of sentiments used in the responses, sorted by frequency (indicated in parentheses), and allows you to filter for them.

Ponder Video Sentiment Filter

And the fourth drop-down shows the themes used in responses on the video, sorted by frequency (indicated in parentheses):

Ponder Video Theme Filter

As you can see, much of the Ponder power you are familiar with when navigating ideas across documents is now available for minute dissections of a single video.

And don’t forget, these capabilities are all available for custom integration on your platform through the Ponder API.

The Guardian: Readers absorb less on Kindles than on paper, study finds

E-reading is bad for reading. Now what?

Researchers continue to show that people retain less and comprehend less when reading digital texts than when reading paper texts. Moreover, what little we retain from e-texts we take less seriously. There is concern that e-reading is an additional threat to the already embattled humanities and, worse, that reading ON-line deteriorates your ability to read OFF-line.

(Hello! Eyes here!)

Indeed, our intuition as readers can guide us in this question:

  • Devices inherently present more distractions.
  • There are tactile, physical components to reading offline that are clearly missing online.

And those are just the cognitive deficits. There’s a whole host of IT problems that make the cognitive ones seem trivial.

If you measure online texts against what paper texts are good at (freedom from distraction, physical cues to provide context and focus your attention), it is no revelation that online texts will lose every time.

So, if you are thinking of trading a paperback or a Xerox in your classroom for a screen, don’t do it.

Unless you have a really good reason. Like: if my students can look up words while they read, they’re much more likely to keep at reading hard texts. If my students have a smart way to track their reading across lots of different documents, they have a much easier time seeing the connections between texts and, as a result, write better papers.

These are things computers are good at. So if we start with what computers are good at and we measure paper texts against online texts, there should also be no surprise that online texts will win (provided the software delivers on its promise).

Research is starting to support some of these claims. A study published recently by researchers at National Chengchi University demonstrated improved comprehension of a text with the help of a scaffolded annotation tool. Research at San Francisco State showed similarly exciting educational outcome improvements.

So here’s a different rule of thumb to consider, one informed by the research I cite above and reinforced by countless conversations with teachers:

If you’re considering moving to e-texts, don’t, unless doing so does nothing short of transforming your classroom in ways that paper can’t, and the transformation has to do with learning, not functionality.

I.e.: This tool will help me push my students to re-read passages they didn’t fully understand, which in turn will get them to be more proactive about asking questions in class. As compared to: This tool makes it possible for my students to see each other’s comments as they read. (The latter is a description of software functionality, and a rather high-level one at that, which may or may not be implemented in a way that has pedagogical value.)

Sometimes an instructor knowing specifically why and how they’re going to use a certain tool makes all the difference in efficacy, so two teachers using the same software can experience drastically different results.

Other times, getting specific with what you intend to use software for is precisely the “missing information” you need to separate the wheat from the chaff when evaluating tools.

In other aspects of our lives, this would be considered stating the obvious. After all, knowing what I care about in a product is how I evaluate the relevance of other people’s reviews of said product, which is why online review forums typically ask if you found the review helpful, as opposed to if you found it informative.

However, for whatever reason, this is still something we’re learning to do with edtech.

In either case, the logistical wins of going from paper to digital alone are not big enough to offset the logistical problems of managing digital, or the cognitive hit we all take when reading from a screen.

New modes of interaction for Flip: Annotating streaming video!

At the end of every Spring semester, the extended ITP community gathers round for a solid week (Monday-Friday, 9:40AM-6PM) of thesis presentations. It’s part gauntlet and part send-off for the graduating class.

This year, with the help of Shawn Van Every (the original architect, builder and maintenance man of the ITP Thesis Streaming Service), I had the opportunity to continue my run of informal experiments in video annotation by trying it out on a handful of thesis presentations.

For the third year running, Thesis Week has been accompanied by the Shep chatroom, created by ITP alum Steve Klise. Shep immediately turned itself into a great place for backchannel commentary on the presentations…or not. I’ve always felt it would be great to see aggregations of Shep activity attached to the timecode of the presentation videos, but Shep conversations unfortunately aren’t logged. I also wondered if turning Shep into some kind of “official record” of audience reactions to thesis would be something of a killjoy.

With the Ponder-Thesis experiment, I wasn’t exactly sure where “annotating thesis presentations” would fall on the work-fun spectrum.

It might be seen as some sort of “semi-official record.” That would make it “work,” and perhaps a bit intimidating, like volunteering to closed-caption T.V. programs.

But annotating with Ponder requires more thinking than closed-captioning, which presumably will soon be replaced by voice recognition software if it hasn’t been already. So maybe it can be fun and engaging in the same sort of “challenging crossword puzzle” way that Ponder for text can be.

Either way, the end goal was clear: I was interested in some sort of readout of audience reactions, cognitive (Did people understand the presentation?), analytical (Was it credible, persuasive?), and emotional (Was it funny, scary, enraging, inspiring?).

Setup

We were able to get Ponder-Thesis up and running by Day 3 (Wednesday) of Thesis Week.

I sent out a simple announcement via email inviting people to come help test this new interface for annotating thesis presentations and, as a group, to create an annotated record of what happened.

Unlike previous test sessions, there was no real opportunity to ask questions.

Results: One Size Does Not Fit All

Annotating live events is a completely different kettle of fish than annotating static video.

I had made several changes to the static video interface in preparation for thesis, but in retrospect, they weren’t drastic enough.

All of the things I’ve mentioned before that make video annotation harder work and generally less enjoyable than text annotation are amplified ten-fold when you add the live element, because now slowing down to stop and reflect isn’t just onerous, it’s not an option.

As a result, the aspect of Ponder that makes it feel like a “fun puzzle” (figuring out which sentiment tag to apply) can’t be done because there simply isn’t time.

It was challenging even for me (who is extremely familiar with the sentiment tags) to figure out at what point to attach my annotation, which tag to apply *AND* write something coherent, all quickly enough so that I’d be ready in time for the next pearl of wisdom or outrageous claim in need of a response.

There were also hints of wanting to replicate the casual feel of the Shep chatroom. People wanted to say “Hi, I’m here” when they arrived.

Going forward, I would tear down the 2-column “Mine v. Theirs” design in favor of a single-column chat-room style conversation space, but I will go into more detail on next steps after reviewing the data that came out of thesis.

Donna Miller Watts presenting: Fictioning, or the Confession of the Librarian

The Data

  • 36 presentations were annotated. However, 50% of the responses were made on just 6 of them.
  • 46 unique users made annotations. (Or at the very least 46 unique browser cookies made annotations.)
  • 266 annotations in total, 71 of which were { clap! }.
  • 30 unique sentiment tags were applied.
    • ???, Syntax?, Who knew?
    • How?, Why?, e.g?, Or…, Truly?
    • Yup, Nope
    • Interesting, Good point, Fair point, Too simple
    • { ! }, Ooh, Awesome, Nice, Right?
    • Spot on!, Well said, Brave!
    • { shudder }, { sigh }, Woe, Uh-oh, Doh!
    • HA, { chuckle }, { clap! }
  • At peak, there were ~19-20 people in Ponder-Thesis simultaneously.

Broken down by type, there were 39 Cognitive annotations, having to do with asking questions or wanting more information; 69 Analytical annotations; and 158 Emotional annotations, although almost half (71) of those were the { clap! }.

Over half of the non-clap! responses had written elaborations as well (113).

  • Natasha Dzurny had the most applause at 10.
  • Sergio Majluf had the most responses at 26.
  • Kang-ting had the most emotional responses at 18.
  • Talya Stein Rochlin had the most emotional responses if you exclude applause at 14.
  • Sergio Majluf racked up the most eloquence points with 3 “Well saids!”
  • Talya Stein Rochlin had the most written comments with 15 and the most laughs at 3.

Below are roll-ups of responses across the 36 presenters categorized by type.

  • Cognitive: Yellow
  • Analytical: Green
  • Emotional: Pink

Below is a forest-for-the-trees view of all responses. Same color-coding applies.

Forest-for-the-Trees view of all responses.

Interaction Issues

I made a few design changes for thesis and fixed a number of interaction issues within the first few hours of testing:

  • Reduced the overall number of response tags and made them more straightforward. E.g. Huh., which has always been the short form of “Food for thought…”, became Interesting!
  • Replaced the 3rd-person version of the tags (xxx is pondering) with the 1st-person version (Interesting!), because after the last test session I felt that a long list of 3rd-person responses read a bit wooden.
  • Added a { clap! } tag for applauding.
  • Made the “nametag” field more discoverable by linking it to clicking on the roster (list of people watching). Probably giving yourself a username should be an overlay that covers the entire page so people don’t have an opportunity to miss it.
  • As responses came in, they filled up the “Theirs” column below the video. Once there were more comments than would fit in your window viewport, you wouldn’t see new comments unless you explicitly scrolled. We deliberately decided not to auto-scroll the list of responses for static video to keep in time with the video, because we thought it would be too distracting. For streaming, however, auto-scroll would have been one less thing for you to have to do while trying to keep pace with the video and thinking about how to comment.

Other issues didn’t become apparent until after it was all over…

  • People didn’t see how to “change camera view.” The default view was a pretty tight shot of the presenter. Really the view you want is a wider shot that includes the presentation materials being projected on the wall behind the speaker.
  • The last test session helped me realize that for video the elaboration text field should stay open between submits. But really, it should probably start off open as it’s too small to be something you “discover” on your own while the video is going.
  • The star button which is meant to allow you to mark points of interest in the video without having to select a tag was never used. I’m not sure how useful it is without the ability to write something.

Solutions

The obvious first step is to go in and squish the remaining interaction issues enumerated above. But there are more systemic problems that need to be addressed.

Problem:

  • People wanted to say “hey” when they logged on. The “live” nature of the experience means social lubrication is more important than it is when annotating text or video on your own. ITP Thesis is also a special case because the people annotating not only know the presenters personally but are likely sitting in the same room (as opposed to watching online). One person said they were less likely to speak their mind on Ponder if they had something critical to say about a presentation.
  • There is also general trepidation over attaching a comment to the wrong point in the timecode. One person who is also familiar with the Ponder text response interface described the problem as “I don’t know what I’m annotating. With text, I know exactly what I’m selecting. With video, I’m doing it blind.”

Solution: Chatroom Layout

Replace the 2-column design with a unified “chatroom” window that encourages more casual chatter. If the timecoding feels incidental (at this point in the presentation, someone happened to say such-and-such), then you worry less about attaching your annotation to the precisely correct moment.

Problem: Too many tags.

The sentiment tags got in the way of commenting. There were simply too many to choose from. People knew what they wanted to write, but the step of selecting a tag slowed them down. This was true of static video as well, for those watching STEM instructional videos.

Solution: Reduce and Reorder

  • Slim down the tag choices, in effect trading fidelity of data (we’ll lose a lot of nuance in what we can aggregate) for lowering the bar for engagement. There should probably be something like 3, but alas I can’t figure out how to chuck out 2 of the following 5.
    • Question?!
    • Interesting
    • Are you sure about that?
    • HAHA
    • { clap ! }
  • Reorder the workflow so that you can write first and then assign tags after, not dissimilar to how people use hashtags in Twitter.

This rearrangement of steps would turn the live video workflow into the exact inversion of the text annotation experience, which is tag first, then elaborate, for very specific reasons that have more or less worked out as theorized.

Conclusion

The modest amount of data we gathered from this year’s thesis presentations was enough to validate the original motivation behind the experiment: collect audience reactions in a way that can yield meaningful analysis of what happened. However, there remains a lot of trial and error to be done to figure out the right social dynamics and interaction affordances to improve engagement. There are clear “next steps” to try, and that is all you can ever ask for.

The only tricky part is finding another venue for testing out the next iteration. If you have video (live or static) and warm bodies (students, friends or people you’ve bribed) and are interested in collaborating with us, get in touch! We are always on the lookout for new use cases and scenarios.

New modes of interaction for Flip videos Part 3

This past semester I’ve been experimenting with new modes of interaction for video. I’ve written about 2 previous test sessions here and here.

Annotating video is hard. Video is sound and imagery moving through time. It’s an immersive and, some might say, brain-short-circuiting medium. Watching 3 videos simultaneously may be the norm today. However, if you’re truly engaged in watching video content, in particular content that is chock full of new and complex ideas, it’s hard to do much else.

Watching video content makes our brains go bonkers.

“’Every possible visual area is just going nuts,’ she adds. What does this mean? It shows that the human brain is anything but inactive when it’s watching television. Instead, a multitude of different cortexes and lobes are lighting up and working with each other…”

“She” is Joy Hirsch, Director of fMRI Research at Columbia University, being cited by the National Cable & Telecommunications Association, which interprets her results to mean watching TV is good for our brains, like Sudoku. I’m not sure about that, but it’s reasonable to conclude that consuming video content occupies quite a lot of our brain.

Of course no one is saying reading doesn’t engage the brain. However, one key difference between text and video makes all the difference when it comes to annotation: with reading, we control the pace, slowing down and speeding up constantly as we scale difficult passages or breeze through easy ones.

Video runs away from us on its own schedule whether or not we can keep up. Sure, we can pause and play, fast-forward and slow down, but our ability to regulate video playback is clunky at best compared to the dexterity with which we can control the pace of reading.

In fact the way researchers describe brain activity while watching tv sounds a lot like trying to keep up with a speeding train. All areas of the brain light up just to keep up with the action.

So what does that mean for those of us building video annotation tools?

Video annotation has all the same cognitive challenges of text annotation, but it comes with additional physiological hurdles as well.

STEM v. The Humanities

I’ve been working off the assumption that responding to STEM material is fundamentally different from responding to material in the humanities. For STEM subjects, the range of relevant responses is much more limited. It essentially amounts to different flavors of “I’m confused.” and “I’m not confused.”

I’m confused because:

  • e.g. I need to see more examples to understand this.
  • Syntax! I don’t know the meaning of this word.
  • How? I need this broken down step-by-step.
  • Why? I want to know why this is so.
  • Scale. I need a point of comparison to understand the significance of this.

I get this because:

  • Apt! Thank you. This is a great example.
  • Got it! This was a really clear explanation.

Humor is a commonly wielded weapon in the arsenal of good teaching, so being able to chuckle in response to the material is relevant as well.

But as is often the case when trying to define heuristics, it’s more complicated than simply STEM versus not-STEM.

Perhaps a more helpful demarcation of territory would be to speak in terms of the manner and tone of the content (text or video) and more or less ignore subject matter altogether. In other words: The way in which I respond to material depends on how the material is talking to me.

For example, the manner and tone with which the speaker addresses the viewer varies dramatically depending on whether the video is a:

  • “How-to” tutorial
  • Expository Lecture
  • Editorializing Opinion
  • Edu-tainment

The tutorial giver is explaining how to get from A to Z by following the intervening steps B through Y. First you do this, then you do that.

The lecturer is a combination of explanatory and provocative. This is how you do this, but here’s some food for thought to get you thinking about why that’s so.

The editorializing opinion-giver is trying to persuade you of a particular viewpoint.

Edu-tainment is, well, exactly that: delivering interesting information in an entertaining format.

And of course, the boundaries between these categories are sometimes blurry. For example, is this Richard Feynman lecture an Expository Lecture or an Editorializing Opinion?

I would argue it falls somewhere in the middle. He’s offering a world view, not just statements of fact. You might say that the best lecturers are always operating in this gray area between fact and opinion.

The Test Session

So in our 3rd test session, unlike the previous 2, I chose 3 very different types of video content to test.

Documentary on The Stanford Prison Guard Experiment (Category: Edu-tainment)

A 10-minute segment of the Biden v. Ryan 2012 Vice Presidential Debate re: Medicare starting at ~32:00. (Category: Editorializing Opinion)

Dan Shiffman’s Introduction to Inheritance from Nature of Code (Category: Expository Lecture)

You can try annotating these videos on Ponder yourself:

  1. Dan Shiffman’s Introduction to Inheritance from Nature of Code.
  2. Biden v. Ryan Vice-Presidential Debate.
  3. The Stanford Prison Experiment documentary.

The Set-up

There were 5 test subjects in the same room, watching 3 different videos embedded in the Ponder video annotation interface, each on their own laptop with headphones. That means that, unlike in previous test sessions, each person was able to control the video on their own.

Each video was ~10 minutes long. The prompt was to watch and annotate with the intention of summarizing the salient points of the video.

2 students watched Dan Shiffman’s Nature of Code (NOC) video. 2 students watched the documentary on the Stanford Prison Experiment. And 1 student watched the debate.

The Results

The Stanford Prison Experiment had the most annotations (15 per user, versus 12 for NOC and 5 for the debate) and the most varied use of annotation tags (22 distinct tags, versus 5 for NOC and 4 for the debate).

Unsurprisingly the prison documentary provoked a lot of emotional reactions (50% of the responses were emotional – 12 different kinds compared to 0 emotional reactions to the debate).

Again unsurprisingly, the most common response to the NOC lecture was “{ chuckle },” which accounted for 12 of the 25 responses. There was only 1 point of confusion, a matter of unfamiliarity with syntax: “What is extends?”

This was a pattern I noted in the previous sessions: with many STEM subjects, everything makes perfect sense in the “lecture.” The problem is that oftentimes, as soon as you try to do it on your own, confusion sets in.

I don’t think there’s any way around this problem other than to bake “problem sets” into video lectures and allow the points of confusion to bubble up through “active trying” rather than “passive listening.”

Intro to Inheritance – NOC; Biden v. Ryan Vice-Presidential Debate; Stanford Prison Experiment

Less is More?

There are 2 annotation modes in Ponder. One displays a small set of annotation tags (9) in a Hollywood Squares arrangement. A second displays a much larger set of tags. Again, the documentary watchers were the only ones to dive into the 2nd set of more nuanced tags.

Less v. More

However, neither student watching the documentary made use of the text elaboration field (they didn’t see it until the end), where you can write a response in addition to applying a tag, whereas the Nature of Code and Biden-Ryan debate watchers did. This made me wonder how having the elaboration field as an option changes the rate and character of the responses.

Everyone reported pausing the video more than they normally would in order to annotate. Much of the pausing and starting simply had to do with the clunkiness of applying your annotation to the right moment in time on the timeline.

It’s all in the prompt.

As with any assignment, designing an effective prompt is half the battle.

When I tested without software, the prompt I used was: Raise your hand if something’s confusing. Raise your hand if something is especially clear.

This time, the prompt was: Annotate to summarize.

In retrospect, summarization is a lot harder than simply noting when you’re confused versus when you’re interested.

Summarization is a forest-for-the-trees kind of exercise. You can’t really know moment-to-moment as you watch a video what the salient points are going to be. You need to consume the whole thing, reflect on it, perhaps re-watch parts or all of it and construct a coherent narrative out of what you took in.

By contrast, noting what’s confusing and what’s interesting is decision-making you can do “in real-time” as you watch.

When I asked people for a summary of their video, no one was prepared to give one (in spite of the exercise), and I understand why.

However, one of the subjects who watched the Stanford Prison Experiment documentary was able to pinpoint the exact sentence uttered by one of the interviewees that he felt summed up the whole thing.

Is Social Always Better?

All 3 tests I’ve conducted were done together, sitting in a classroom. At Ponder, we’ve been discussing the idea of working with schools to set up structured flip study periods. It would be interesting to study the effect of socialization on flip. Do students pay closer attention to the material in a study hall environment versus studying alone at home?

The version of Ponder video we used for the test session shows other users’ activity on the same video in real-time. As you watch and annotate, you see other people’s annotations popping up on the timeline.

For the 2 people watching the Stanford documentary, that sense of watching with someone else was fun and engaging. They both reported being spurred on to explore the annotation tags when they saw the other person using a new one. (e.g. “Appreciates perspicacity? Where’s that one?”)

By contrast, for the 2 people trying to digest Shiffman’s lecture, the real-time feedback was distracting.

I assigned an annotation exercise to another test subject to be done on her own time. The set-up was less social both in the sense that she was not sitting in a room with other people watching and annotating videos, and in the sense that she was not annotating the video with anyone else via the software.

I gave the same prompt. Interestingly, from the way she described it, she approached the task much like a personal note-taking exercise. She also watched Shiffman’s Nature of Code video. For her, assigning predefined annotation tags got in the way of note-taking.

Interaction Learnings

  • The big challenge with video (and audio) is that they are a black box content-wise. As a result, the mechanism that works so well for text (simply tagging an excerpt of text with a predefined response tag) does less well on video where the artifact (an annotation tag attached to timecode) is not so compelling. So I increased emphasis on the elaboration field, keeping it open at all times to encourage people to write more.
  • On the other hand, the forest-for-the-trees view offered on the video timeline is, I think, more interesting to look at than the underline heatmap visualization for text, so I’ll be looking for ways to build on that.

    Timeline

  • There was unanimous desire to be able to drag the timecode tick marks after a response had already been submitted. We implemented that right away.
  • There was also universal desire to be able to attach a response to a span of time (as opposed to a single moment in time). The interaction for this is tricky, so we’ve punted this feature for now.
  • One user requested an interaction feature we had implemented but removed after light testing because we weren’t sure if it would prove to be more confusing than convenient: automatically stopping the video whenever you made mouse gestures indicating you intended to create an annotation, and then restarting the video as soon as you finished submitting. I’m still not sure what to do about this, but it supports the idea that the difficulty of pacing video consumption makes annotating and responding to it more onerous than doing the same with text.

Takeaways

  1. Annotating video is hard to do so any interaction affordance to make it easier helps.
  2. Dense material (e.g. Shiffman’s lecture) is more challenging to annotate. Primary sources (e.g. the debate) are also challenging to annotate. The more carefully produced and pre-digested the material (e.g. the documentary), the easier it is to annotate.
  3. With video, we should be encouraging more writing (text elaborations of response tags) to give people more of a view into the content.
  4. Real-time interaction with other users is not always desirable. Users should be given a way to turn it on/off for different situations.
  5. There may be a benefit to setting up “study halls” (virtual or physical) for consuming flip content, but this is mere intuition right now and needs to be tested further.

Last but not least, thank you to everyone at ITP who participated in these informal test sessions this semester, and to Shawn Van Every and Dan Shiffman for your interest and support.

Logging Tutorial

New modes of interaction for Flip videos Pt. 2

This semester in addition to teaching, I am a SIR (Something-in-Residence) at ITP, NYU-Tisch’s art/design and technology program.

My mission for the next 3 months is to experiment with new modes of interaction for video: both for flip videos and for live events. (Two very different fish!)

User Study No. 2

I recently wrote about my first user study with students from Dan Shiffman’s Nature of Code class. A couple of weeks ago, 3 students from Shawn Van Every’s class, Always On, Always Connected, volunteered to watch 4 videos. In this class, students design and build new applications which leverage the built-in sensor and media capabilities of mobile devices.

The Setup

Again, there were no computers involved. We screened the videos movie-style in a classroom. Instead of asking for 2 modes of interaction (Yes that was super clear! versus Help!), I asked for a single mode of feedback: “Raise your hand when you’ve come across something you want to return to.”

The 4 videos introduce students to developing for the Android OS. It’s important to note that simply by virtue of working in the Android development environment (as opposed to Processing) the material covered in these videos is much more difficult than what was covered in the Nature of Code videos the previous week. However, the students’ level of programming experience is about the same.

What happened…

Video 1: Introduction to logging

Zero hands. But from the looks on people’s faces, it was clear it was not because everyone was ready to move on. A few things to note:

  1. For screencasts, watching a video movie-style is extra not-helpful as it is difficult to read the text.
  2. The video begins promptly after a 20s intro. With instructional videos, the emphasis is always on brevity (the shorter the better!). In this case, however, I wonder whether 20 seconds is really enough to allow you to settle down and settle into the business of trying to wrap your head around new and alien concepts. I’m sure #1 made it harder to focus as well.
  3. Reading code you didn’t write is always challenging and I think even more so on video where we’re much more tuned into action (what’s changing) and much less likely to notice or parse what’s static.
  4. Unlike before, when I asked after-the-fact, “Where in the video do you want to go back to?” the students were unable to respond. Instead the unanimous response was, “Let’s watch the entire video again.” This is where collecting passive data about the number of times any given second of a video is watched by the same person would be helpful.
  5. In general, questions had to do with backstory. The individual steps of how to log messages to the console were clear (a minimal sketch of what those steps boil down to follows below). What was missed was the what and the why. First, what am I looking at? And second, why would I ever want to log messages to the console? I say “missed” and not “missing” because the answers to those questions were in the videos. But for whatever reason, they were not fully absorbed.
  6. Last but not least, I have to imagine that this watching as a group and raising your hands business feels forced if not downright embarrassing.

Hopefully, a software interface for doing the same thing will elicit more free-flowing responses from students as it will provide them with a way to ask questions “in private.”
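
For reference, here is a minimal sketch of my own (not code taken from the video, and the class and tag names are mine) of what those logging steps amount to in an Android Activity, and of the “why”: logging is the quickest way to peek at what your code is actually doing while the app runs.

    // Minimal sketch (not from the video): Log.d() writes a debug message to the
    // console (LogCat) so you can inspect values while the app is running.
    import android.app.Activity;
    import android.os.Bundle;
    import android.util.Log;

    public class MainActivity extends Activity {
        // tag used to filter this app's messages in LogCat
        private static final String TAG = "MainActivity";

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            int answer = 6 * 7;
            // why log? to see what the program computed without a debugger
            Log.d(TAG, "onCreate called, answer = " + answer);
        }
    }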

Videos 2-4

Everyone was more settled in after watching the logging video, and each subsequent video built on the momentum accumulated in the previous one. Starting with Video 2, I began to get some hand-raising, and when we returned to those points in the video, people were very specific about what was confusing them.

“Why did you do that there?” or its converse, “Why didn’t you do that there?” was a common type of question, as was “What is the significance of that syntax?”

Another way to look at it is: There were never any issues with the “How.” The steps were clearly communicated and video is a great medium for explaining how. The questions almost always had to do with “Why?”, which makes me wonder if this is the particular challenge of video as a medium for instruction.

Does learning “Why?” require a conversation initiated by you asking your particular formulation of “Why?”

Video 2: Toasts

Video 3: Lifecycle of an Android App Part 1

  • @2:00: What is the significance of @+id? Why aren’t you using the strings file for this?
  • @6:50: Why did you change arg0 to clickedView?

Other syntax questions included:

  • What’s the difference between protected and public functions?
  • What’s extends and implements?
  • What’s super()?
  • What’s @Override?

All of these syntax questions pointed to a much larger / deeper rabbit hole having to do with object-oriented programming and encapsulation, a quick way to derail any getting started tutorial.
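
For reference, here is a minimal, self-contained sketch of my own (not code from the videos; the class names are invented for illustration) showing what each of those keywords does:

    // "implements" promises to provide an interface's methods; "extends" inherits from a parent class.
    interface Resettable {
        void reset();                       // an interface declares methods without bodies
    }

    class Counter {
        protected int count = 0;            // "protected": visible to this class and its subclasses

        protected void bump(int amount) {   // a protected helper: subclasses can call it, unrelated code cannot
            count += amount;
        }

        public void increment() {           // "public": callable from anywhere
            bump(1);
        }
    }

    class ClickCounter extends Counter implements Resettable {
        public ClickCounter() {
            super();                        // super() runs the parent class's constructor first
        }

        @Override                           // @Override asks the compiler to verify we're replacing a parent method
        public void increment() {
            super.increment();              // reuse the parent's behavior, then add to it
            System.out.println("count = " + count);
        }

        @Override
        public void reset() {
            count = 0;
        }
    }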

In general though, I don’t think these syntax questions prevented you from understanding the point of the videos, which were about creating pop-up messages (Video 2) and the lifecycle of Android apps (Videos 3 and 4): when do they pause versus stop, re-initialize versus preserve state.

Video 4: Lifecycle of an Android App Part 2

In Video 4, there were a lot of nodding heads during a section that shows the difference between pausing/resuming and killing/creating an Android app. The demonstration starts around @2:20 and goes for a full minute until @3:23. Epic in online video terms. It’s followed by a walk-through of what’s happening in the code and then reinforced again through a perusal of Android documentation @3:50 where an otherwise impenetrable event-flow diagram is now much more understandable.

It’s also important to note that both the demo which starts at @2:20 and the documentation overview at @3:50 are preceded by “downtime” spent trying, failing and debugging while starting the Android emulator and navigating in the browser to find the flow diagram.

In general there’s a lot of showing in these videos. Each concept that’s explained is also demonstrated. This segment, however, was particularly lengthy, and in accordance with the law of “It-Takes-A-While-To-Understand-Even-What-We’re-Talking-About” (#2 above), my completely unverified interpretation of events is that the length of the demonstration (far from being boring) helped everyone sink their teeth into the material.
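
To make the pause/resume-versus-kill/create distinction concrete, here is a sketch of my own (again, not the video’s code; the class name is invented) that simply logs each lifecycle callback. Run it, press Home, come back, then kill the app, and watch which messages appear:

    // Minimal sketch (not from the video): log each lifecycle callback to see
    // when the activity is paused/resumed versus killed and re-created.
    import android.app.Activity;
    import android.os.Bundle;
    import android.util.Log;

    public class LifecycleDemoActivity extends Activity {
        private static final String TAG = "LifecycleDemo";

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            Log.d(TAG, "onCreate: created (or re-created after being killed)");
        }

        @Override
        protected void onResume() {
            super.onResume();
            Log.d(TAG, "onResume: back in the foreground, state preserved");
        }

        @Override
        protected void onPause() {
            super.onPause();
            Log.d(TAG, "onPause: left the foreground but still alive");
        }

        @Override
        protected void onDestroy() {
            super.onDestroy();
            Log.d(TAG, "onDestroy: being killed; state must be saved to survive");
        }
    }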

What’s Next and Learnings for Design

As we head into testing actual software interfaces, this 2nd session gave me more concrete takeaways for workflow.

  1. You lay down “bookmarks” on the timeline as you watch to mark points you would like to return to.
  2. You lay down “bookmarks” on the timeline after you’ve watched the video to signal to the instructor what you would like to review in class the next time you meet.
  3. You can expand upon “bookmarks” with an actual question.
  4. You select from a set of question tags that hopefully help you formulate your question. (More to come on this.)

While it’s important to break down the videos into digestible morsels, it’s also important to make it easy to watch a few in a row, as the first one is always going to be the most painful one to settle into. There are ways to encourage serial watching with user interface design (e.g. a playlist module, a next-video button, previewing and auto-playing the next video). But perhaps something can be done on the content side as well by ending each video on a so-called “cliffhanger.”

Magnitude! Direction!

New modes of interaction for Flip videos Pt. 1

This semester in addition to teaching, I am a SIR (Something-in-Residence, no joke) at ITP, NYU-Tisch’s art/design and technology program.

My mission for the next 3 months is to experiment with new modes of interaction for video: both for the flipped classroom and live events. (Two very different fish!)

Magnitude! Direction!

2 weeks ago, I conducted the first in a series of informal user studies with 3 students from Dan Shiffman’s Nature of Code class, which introduces techniques for modeling natural systems in the Java-based programming environment Processing.

User Study No. 1

Last year, Dan flipped his class, creating a series of ~10 minute videos ranging from how to use random() to move objects around a screen to modeling the movement of ant colonies.

The Setup

We viewed the first 2 videos from Chapter 1 which introduce the concept of a vector and the PVector class in Processing.

There were no computers involved. We watched them together on a big screen, just like you would a movie, except I sat facing the students.

The only “interaction” I asked of them was to:

  1. Raise their right hand when Dan said something extra clear.
  2. Raise both hands to indicate “Help!”

What happened…

On the whole, the “interaction” was a non-event. There were a few tentative raisings of the right hand and zero raisings of both hands.

However, after each video, when I asked at what points in the video things were not perfectly clear, there was no hesitation in the replies.

Points of confusion fell mostly into 1 of 2 categories:

  1. I simply need more background information.
  2. Maybe if I rewatch that part it will help.

Although I will say that, in my subjective opinion, number 2 was said in a tentative and theoretical fashion, which is interesting because “the ability to re-watch” is a much-vaunted benefit of flip.

It must be said though, that we’re early in the semester and the concepts being introduced in these videos are simple relative to what’s to come.

Learnings for design…

Some early thoughts I had coming out of this first session were:

  • Unlike reading, video is too overwhelming for you to be able to reflect on how you’re reacting to it while you’re trying to take it in. (Just compare reading a novel to watching a movie.)
  • That being said, so long as the video is short enough, people have a pretty good idea of where they lost their way, even after the fact.
  • Still, there needs to be a pay-off for bothering to register where those points are on the video. For example, “If I take the time to mark the points in the video where I needed more information, the instructor will review them in class.”
  • It’s still unclear to me how a “social” element might change both expectations and behavior. If you could see other people registering their points of confusion and asking questions and you could simply pile on and say “I have this question too” (many discussion forums have this feature), the whole dynamic could change.
  • Hence, the popularity of inserting slides with multiple choice questions every few minutes in MOOC videos. The slides serve as an explicit break to give the viewer space to reflect on what they’ve just watched.

I wonder though if they need to be questions. Genuinely thought-provoking questions in multiple choice form are hard to come by.

In a flip class where there is such a thing as in-class face time, a simple checklist of concepts might suffice.

Here are the concepts that were covered. Cross off the ones you feel good about. Star the ones you’d like to spend more time on. I think it’s key that you are asked to do both actions, meaning there shouldn’t be a default that allows you to proceed without explicitly crossing off or starring a concept.

Concepts for Video 1.1:

  • A vector is an arrow ★ @2:00: Why is it an arrow and not a line?
  • A vector has magnitude (length).
  • A vector has direction (theta) ★ @3:30: What’s theta?
  • A vector is a way to store a pair of numbers: x and y.
  • A vector is the hypotenuse of a right triangle (Pythagorean Theorem).

Concepts for Video 1.2:

  • PVector is Processing’s vector class.
  • How to construct a new PVector.
  • Replacing floats x,y with a PVector for location.
  • Replacing float xspeed, yspeed with a PVector for velocity.
  • Using the add() method to add 2 PVectors.
  • Adding the velocity vector to the location vector.
  • Using the x and y attributes of a PVector to check edges.

The tricky thing with Video 1.2 is that the only point of confusion was @10:27, when Dan says, almost as an aside, that you add the velocity vector to the location vector, which you can think of as a vector from the origin. I don’t think this kind of problem area would surface in a list of concepts to cross off or star.
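
To connect the concepts to code, here is a minimal Processing sketch of my own (not Dan’s code) that traces the Video 1.2 list: location and velocity each stored as a PVector, velocity added to location every frame, and the x and y attributes used to check the edges.

    // Minimal sketch (mine, not Dan's) tracing the concepts from Video 1.2.
    PVector location;
    PVector velocity;

    void setup() {
      size(640, 360);
      location = new PVector(width / 2, height / 2);  // construct a new PVector (a pair of numbers: x and y)
      velocity = new PVector(2.5, -1.5);              // replaces float xspeed, yspeed
      println(velocity.mag());                        // magnitude (length), i.e. the Pythagorean Theorem
    }

    void draw() {
      background(255);
      location.add(velocity);                         // add the velocity vector to the location vector

      // use the x and y attributes of a PVector to check edges
      if (location.x > width || location.x < 0) {
        velocity.x *= -1;
      }
      if (location.y > height || location.y < 0) {
        velocity.y *= -1;
      }

      ellipse(location.x, location.y, 24, 24);
    }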

Next Session

In session 2, I will be conducting another user study with students from Shawn Van Every’s class “Always On, Always Connected” (an exploration of the technologies and designs that keep us online 24/7).

My plan is to try asking the students to raise their hand simply to register: I want somebody to review what happened at this point in the video.

We’ll see how it goes!

Realizing Flip: Ponder for Video and eBooks

Pondering Video and/or eBooks for your class? Sign up for a pilot to alpha test.

Pondering Video?

If you’re flipping your classroom or simply have a lot of video content you’d like your students to watch outside of class, Ponder will soon be a way for you to engage and track student activity around video.

Just like Ponder for reading, Ponder for Video doesn’t require you to upload anything; it will work on YouTube, Vimeo, Dropbox or Google Drive. Just like Ponder for reading, you will be able to review visualizations of your students’ responses along the video timeline. Even better than Ponder for reading, you will be able to manage a question queue and see where students are getting stuck, watching and re-watching the same segment.

Pondering the Ring of Gyges

Pondering eBooks?

Ponder already works on any text that renders in a browser (including PDFs!), but we’ve been hankering after a way to organize Ponder activity around chapters and sub-sections for longer documents like books. So we were excited to discover an EPUB-lishing service called Thuze that integrates nicely with the Ponder browser add-on. Ponder + Thuze means you will be able to read eBooks in the Thuze web reader and organize Ponder activity around chapters and sections of long texts.

Federalist Papers as an ePub

With Thuze, Ponder works on ePubs just as you would expect

Sign up!

We are looking for teachers and professors interested in trying out Ponder in these new contexts and providing us with feedback on the myriad ways it works and doesn’t work with your class.

For more information, fill out this short Google Doc form.

Learnings from the pilots will (of course!) be incorporated into the product and released for everyone.

Learnings from the Classroom: Visualizing Reactions on Reading Assignments

Recently I wrote about the lessons we’re learning from our first K12 pilots this semester.

Our biggest challenge thus far has been adapting Ponder, which was originally designed around self-directed reading scenarios, to assigned reading.

Whereas a really active self-directed article might provoke a dozen or so responses…assigned reading can generate 100-200 responses from a class of 20 students.

This can easily overwhelm both the feed and the article page itself. In my last post I wrote about how we’re starting to ameliorate the issues in the feed.

Color Coded Sentiments

Red Light, Green Light, Yellow Light: React, Evaluate, Comprehend.

We’ve also recently shipped a change to the browser add-on to provide teachers and students with a forest (as opposed to the trees) view of student responses.

Those of you using Ponder might have noticed that our Sentiment tags in the Ponder response box are color coded.

We’re now using those colors on the article page itself, so you can see at a glance where students are responding emotionally, where they’re having comprehension issues and where they’re exercising judgement.

Yellow marks responses having to do with basic comprehension (or incomprehension, as the case may be) of the reading:

What does this mean? I’d like examples. I need a break down.

Green marks responses that pass judgement through evaluation:

This is hyperbole, oversimplification, insight!

Red marks responses that express some kind of emotional reaction:

Disapproval, regret, admiration.

The tick marks on the right give you a sense of the activity level across the entire reading, be it a one-page article or a 100-page essay.

Visualizing Sentiments By Type

It’s a small step, but it’s the kind of thing we want to do more of to help teachers get a quick sense of how the class responded as a whole to the reading.

Learnings from the Classroom: The difference between self-directed and assigned reading.

We’ve been iterating on and refining Ponder in the higher ed classroom for two years now, and it’s been really interesting to compare that experience to the past two months of watching our K12 classes get going. (Early on, we hit an IT-related snag at the WHEELS Academy.) Now we’re getting to the good stuff that has to do with how students are actually using Ponder to do close reading and how a teacher might use it to evaluate their students.

In many respects the K12 classroom is much more demanding than higher ed, though both present the challenge to us of figuring out:

How to make Ponder work for both self-directed *and* assigned reading.

What are the key differences?

One of the features that’s worked out really well for self-directed reading is that, unlike most social media feeds, which are built around individuals, the Ponder Feed rolls up student responses by article. That means that in the feed you quickly get a sense of where the conversations are happening, even if students happen upon the same article independently.

However, with assigned readings, where even short two-page articles can generate over a hundred student responses, rolling up responses by article is just disorienting and overwhelming, and fails to provide teachers with a quick way to evaluate each student’s understanding of the reading.

3 classes in particular really helped us understand the problem better: Mr. V’s 9th grade Global Studies class at Stuyvesant H.S., Ms. Perez’s 8th grade English class at xxx in Chicago and Tom Lynch‘s graduate-level Curriculum Development and Instruction Planning with Technology class at the Pace University School of Education.

We knew this was going to be a problem but it wasn’t clear to us how best to address this issue quickly until the first assigned reading responses began to roll in…

As a quick fix, we re-collated assigned reading responses around the student. It’s an improvement on what we had before, but it’s not entirely clear this is the best solution. We’ve gained clarity around how each student responded to the text, but we’ve lost the thread of conversation: how students are responding to each other.

The path to supporting assigned reading well is going to be a steep and rocky one, but we know the only way to negotiate it is through trial and error and paying close attention to what’s going on in our classrooms.

2 responses from the same student.

Multiple Choice v. Free Response

Is multiple choice really so bad?

For the humanities, we answer unequivocally: “sorta.” If any of the truly important questions about the human condition had unambiguously correct answers, human history would have been a long boring tale of comity and well-being.

As a result, any attempt to assess subjects in the humanities through multiple choice cannot and does not broach the questions dearest to humanists. Instead it must skirt and skulk around, looking for secondary signs of an engaged and thinking mind. We ask our students to identify metaphors and supporting arguments without also asking: How evocative is the metaphor? How convincing is the argument?

Why? Because multiple choice questions, as we all know, must by definition contain at least one unambiguously correct answer. And we also know that the more interesting the question, the less clear the answer.

Still, is there a way to do multiple choice that’s open-ended and allows for asking questions with no right answer? We think we have a (nuanced, textured) answer to that question. We have embraced multiple choice, sorta …by turning it inside-out and upside-down.

In Ponder multiple choice, the questions are predefined and the possible answers are infinite. The questions are not questions for the students (e.g. Can you identify the topic sentence of this paragraph?); they’re questions for the author of the text, opportunities to “talk back” to the reading, and it’s up to the student to figure out where and how to ask them.

After all, we learn by asking questions, not answering them.

Our readers have a more important task than simply finding examples of Metaphor versus Simile, Hubris versus Pride, Compromise versus Conciliation (themes predefined by teachers). They are also asked to identify examples that perplex them, intrigue them, shock, disgust, inspire, agitate, make them wonder if someone might be a touch hysterical or someone else is oversimplifying something that deserves more serious consideration (reactions predefined by us). They don’t simply observe and identify, they analyze and evaluate, both the author’s ideas and their own reactions to those ideas. In a word, they think.

Still, assessment is simple. Though Ponder is not a Scantron machine that can tell you automatically who was right and who was wrong, it’s concise, data-rich, and designed to call attention to good work. It is easy for both teachers and classmates to evaluate each response.

  1. Has it substance?
  2. Is the reaction apt?
  3. Are the themes apt?

If not, let’s talk about it!

2 responses from the same student.

Same student, 2 different responses. Both are invitations if not provocations to ask Why?

And Ponder is only getting smarter. While we can’t pass absolute judgement on student responses, what we can do automatically is build a nuanced profile of each student over time.

  • Who’s taking the time to read in-depth articles?
  • Who’s expanding beyond their comfort zone to read about new subject areas?
  • When confronted with something confusing, who’s able to identify exactly how they’re confused?
  • Who’s reacting emotionally? Who’s able to evaluate the soundness of logic?
  • Who’s figured out how to get their classmates interested in what they’re interested in?
  • Who’s good at starting conversations?

What we’re interested in is the ability to paint a portrait of readers that reflects their level of curiosity, comprehension, self-awareness and awareness of others.

We can’t produce a number with the finality of a Scantron machine.

But really, what does a 67 versus an 83 mean when we’re talking about the Bill of Rights or Leaves of Grass?

You’ll be able to explain that number better with the insights Ponder affords you into your students’ thinking. Multiple choice is really not so bad if you don’t let it kill the open-ended nature of intellectual inquiry. We like it, sorta.