Ponder Sidebar

Forest(s) for the Trees: Filtering Ponder Heatmaps

Ponder provides a place to collect and share your thoughts about your reading, but what do you do when you've collected a lot of thoughts on a particular piece? Even tens of responses on a single page can get overwhelming, and groups of students often create hundreds, so we've added some tools to make it easier to navigate them.

Familiar Tick-marks

We love our tick marks and the quick overview they give you, so you'll find them in their usual place on the right side of the window. As before, there's one tick mark per excerpt that elicited at least one response, and the colors match the type of sentiment. You'll also still find each selection underlined in the page, so you can see and reply to them as you're reading.

Introducing the Ponder Sidebar

As before, clicking a tick mark or underline will scroll your window to the location of the corresponding text in the document, but it will also expand the Ponder sidebar where the new review tools live. (If you need to dismiss the sidebar, just click anywhere outside the sidebar.)

Ponder Sidebar

In the sidebar, you’ll see a list of all the excerpts from your groups. Similar to your feed, all the responses for a given excerpt are bundled together in a “nugget”. When the sidebar opens, the nugget for the tick mark you clicked will be highlighted. You’ll also see some summary stats and drop-downs – more on that in a moment.

Anatomy of a Nugget

The nugget shows the sentiment of the user who made the first response on that excerpt; in this case, badtz appreciates the eloquence of the statement "We are not interested in students just picking an answer, but justifying the answers."

Sidebar Nugget

At the bottom, you can see that one other user has replied to badtz's comment, along with a green box with a 1 and a yellow box with a 1. Each box indicates the number of responses of each sentiment type. In this case, badtz's response was a green/analytical comment. Clicking the ellipsis exposes the details of the yellow/cognitive reply.

Replying and removing responses

Mousing over the nugget gives you the option to add your own response to this excerpt (Respond/Update) or remove it (the X).

Embed Respond Box


Sorting and Filtering 

But what if there are a bunch of responses? We've added sorting and filtering to make them easier to review. At the top of the sidebar, you now see summary metrics for the document: the total number of excerpts annotated and the number of annotations on those excerpts. Using the drop-downs at the top, you can filter those responses by group, responder, sentiment, and theme.

The first drop-down allows you to filter the responses by group; for example, so a teacher can see one section at a time. The numbers in parentheses indicate the number of responses created by each group.

Group Filter


Want to see just your responses, or those from a particular student? The second drop-down shows each responder, sorted by the number of responses they created, which is indicated in parentheses next to each username.

User Filter


The third drop-down shows the mix of sentiments used in the responses, sorted by frequency (indicated in parentheses), and allows you to filter for them.

Sentiment Filter


And the fourth drop-down shows the themes used in responses on the document, sorted by frequency, indicated in parentheses:

Theme Filter Dropdown


The filters work together and filter each other; for example, when you filter for a particular group, the other filters will only include the users, sentiments, and themes from that group's activity.
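
Under the hood, this kind of faceted filtering is conceptually simple. Below is a minimal sketch in Java of how cross-filtering might work; the Response record, its fields, and the matches helper are hypothetical stand-ins for illustration, not Ponder's actual implementation.

    import java.util.List;
    import java.util.stream.Collectors;

    // Hypothetical response record; field names are illustrative only.
    record Response(String group, String responder, String sentiment, String theme) {}

    // Hypothetical filter state: null means "no filter applied" for that facet.
    record Filters(String group, String responder, String sentiment, String theme) {

        // A response survives only if it matches every active facet.
        boolean matches(Response r) {
            return (group == null || group.equals(r.group()))
                && (responder == null || responder.equals(r.responder()))
                && (sentiment == null || sentiment.equals(r.sentiment()))
                && (theme == null || theme.equals(r.theme()));
        }
    }

    class SidebarFilterSketch {
        // The filters "filter each other": the options offered for one facet are
        // computed from the responses that pass all of the other facets.
        static List<String> responderOptions(List<Response> all, Filters f) {
            Filters withoutResponder = new Filters(f.group(), null, f.sentiment(), f.theme());
            return all.stream()
                      .filter(withoutResponder::matches)
                      .map(Response::responder)
                      .distinct()
                      .collect(Collectors.toList());
        }
    }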

Lastly, underneath the filters is the sort drop-down (a rough sketch of how these orderings might be computed follows the list below).

Sort Dropdown

  • # of Replies sorts the nuggets by the number of replies that occurred on each.
  • # of Themes sorts all of the excerpts by the number of themes that were tagged to each.
  • Controversy sorts the excerpts by a measure of disagreement, based on the mix of sentiments and sentiment types used on each.
  • Last Updated shows the most recently updated nuggets first.
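
As a rough illustration of how these orderings might be computed, here's a hedged sketch in Java; the Nugget record and especially the controversyScore formula are invented for the example and are not Ponder's actual implementation.

    import java.util.Comparator;
    import java.util.List;
    import java.util.Map;

    // Hypothetical nugget roll-up; field names are illustrative, not Ponder's schema.
    record Nugget(int replyCount, int themeCount, long lastUpdated,
                  Map<String, Integer> sentimentCounts) {

        // Invented stand-in for the controversy measure: how evenly the sentiment
        // types are split. An even split (maximum disagreement) scores near 1,
        // a single dominant sentiment scores near 0.
        double controversyScore() {
            int total = sentimentCounts().values().stream().mapToInt(Integer::intValue).sum();
            if (total == 0) return 0.0;
            int max = sentimentCounts().values().stream().mapToInt(Integer::intValue).max().orElse(0);
            return 1.0 - ((double) max / total);
        }
    }

    class NuggetSorts {
        static final Comparator<Nugget> BY_REPLIES      = Comparator.comparingInt(Nugget::replyCount).reversed();
        static final Comparator<Nugget> BY_THEMES       = Comparator.comparingInt(Nugget::themeCount).reversed();
        static final Comparator<Nugget> BY_CONTROVERSY  = Comparator.comparingDouble(Nugget::controversyScore).reversed();
        static final Comparator<Nugget> BY_LAST_UPDATED = Comparator.comparingLong(Nugget::lastUpdated).reversed();

        // Sort so that the "biggest" nuggets (most replies, themes, controversy,
        // or most recent activity) come first.
        static void sortBy(List<Nugget> nuggets, Comparator<Nugget> order) {
            nuggets.sort(order);
        }
    }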

 

As you can see, much of the Ponder power you're familiar with when navigating ideas across documents is now available for minute dissections of a single document or passage.

And don’t forget, these capabilities are all available for custom integration on your platform through the Ponder API.

Hassle Factor: University Web Sites

Applying Behavioral Economics to College Success

Last week we attended a workshop run by the consulting firm Ideas42 on behalf of the Robin Hood Foundation. The goal of the day-long event was to provide teams with an overview of behavioral economics, the study of the psychological, social, cognitive and emotional factors that go into decision-making.

Students lose their way in school for a broad range of reasons, many of which are non-academic, and the hope behind Robin Hood’s prize is that technological solutions can help them scale already proven strategies for improving matriculation rates.

The day began with an overview of the sessions to come from ideas42's Josh Wright:

  • Psychological “Scarcity” – how even unrelated stress and anxiety levy a cognitive tax.
  • Hassle Factors – how seemingly insignificant logistical challenges can discourage and therefore effectively prevent task completion.
  • Limited Attention – Information overload.
  • Self-control – how to shore up self-control through social bonds, incentives and tricks you play on your own psyche.
  • Prospective Memory – not only remembering to do something, but following through to actually do it.
  • Social Norms – the human tendency to choose “normal” over “right”.

On the whole the sessions were lively, peppered with informal experiments, anecdotes and studies that illustrated key points through examples rather than jargon and formal definitions. Every session provoked incisive questions from the audience. For our part, we walked away with much more specific ideas for the design and implementation of our solutions as well as a host of questions for the folks at Robin Hood and CUNY, mostly around how the program will be introduced to the students.

Scarcity

NYT: The Mental Strain of Making Do With Less

The Mental Strain of Making Do With Less Mullainathan, NYT

Sendhil Mullainathan’s well-argued presentation on the Psychology of Scarcity made abundantly clear how poverty in one area of life (financial) creates poverty in another (academic performance). Study after study showed how even subtle reminders of financial stresses degraded cognitive performance. (You can get a synopsis from his New York Times piece on the same topic.)

  • The obvious next question to ask then was: Can you prime in the opposite direction? Can you help people forget their stresses and perform better than they would otherwise? The answer? Nothing conclusive so far, although an interesting study found that activating Asian female students' positive (ethnic) vs. negative (female) stereotypes affected their quantitative performance, which suggests it may be possible here too.
  • Given that the nature of our relationship with the students will be long-term, another question we had was: Does the effect of priming wear off over time with repeated exposure, positive or negative?
  • Another issue this brought up for us is whether the mere fact that students are participating in this program will remind them of their "remediation" status, thereby undermining our efforts to bolster their performance. As we understand it to date, only remedial students will be using our technologies. Are we missing an opportunity to build technologies that help remedial students feel a part of the CUNY community as a whole?

Filling out Gigantic Forms

William Congdon talked about hassle factors. We could all relate to the hassles of coordinating calendars that span different aspects of life (work, school, childcare, family, commuting). Ideas42 in particular is working on improving the onerous process of applying for financial aid. Two approaches came up repeatedly: 1) defaults, to reduce the cognitive load of making decisions; and 2) pre-filled forms, to remove the hassle of having to "look up" information the university already has. So we're wondering:

  • Will participation be mandatory or will students be asked to decide? If the latter, will their participation be assumed with the option to opt out or will they be asked to opt-in?
  • Will students be pre-registered, or will they need to go sign up? Can we piggy-back on their CUNY accounts?
  • Will we have API access to student schedules and class syllabi, or will we need to ask the student to provide that information to us separately?

Information Overload

William also covered Limited Attention, a familiar topic in modern day life. One interesting tidbit from this session: It’s generally assumed that students’ preferred mode of communication is SMS. However, like Twitter, email before it and perhaps Yo! to come, what happens when every system and organization shifts to using text messages? More specifically, how will our communications with students interact with / collide with CUNY’s existing student support program START?

Hey, remind me to…

Matt Darling presented on Prospective Memory, the art of following through on future commitments. Memory, or the lack of it, is clearly the first problem to overcome. But assuming you are able to implement some kind of reminder system, how do you actually make those reminders count? Hassle factors and self-control (see below) come into play for sure. But Matt pointed to the power of “being specific” as one simple technique that doesn’t rely on the student to be more disciplined.

It made us reconsider how we're thinking about designing our reminders. When you send them, how often you send them, and the language you use in them of course remain important factors. But the real design problem lies in what exactly you are reminding the student to do, and how you want them to respond to the reminder.

Specifically for us, we’re working on ways to make tasks more concrete and bite-sized (aka, doable), tasks students can easily imagine completing successfully in a limited amount of time.

Creating Community

Allie Rosenbloom reminded us of the now famous marshmallow test. Self-control or willpower is a tricky issue in light of Sendhil's ideas about scarcity. In an environment of scarcity, there simply isn't a lot of self-control to go around. Social supports and personal incentives (e.g. betting against yourself) were two approaches discussed. The challenge we see ahead is how to create social supports through our technology when the students who will be using our service may or may not be in the same classes, or even on the same campus, due to the structure of the Randomized Controlled Trial (RCT). We are encouraged, though, that the student population will be big enough that we can build community around shared interests and career aspirations, if not coursework. Allie's talk also supported the idea that "getting specific" with tasks would boost performance because, as we all know, focusing on "exercising today" is a lot easier than thinking about the 20 days of exercise you committed to for the month.

What is everyone else doing?

Finally, social norms come into play. The studies noted here focused on public service announcements intended to discourage problematic behaviors that end up reinforcing them instead. Examples included posters designed to discourage binge drinking that make the reader who doesn't drink feel abnormal (since everyone else must clearly be drinking), or that provoke petrified wood theft rather than discourage it.

  • Most relevant here is that commuting community college students (who are the majority) often feel isolated and don't have a good sense of how other students are handling the challenge of college. Social norms seem most relevant to us in terms of Ponder providing an atmosphere where students feel connected with their classmates, can see that others are working hard, and engage with one another through their college and career interests. We wondered if we could coordinate with existing CUNY support services to offset the somewhat disparate nature of the randomized participating students and provide an in-person, face-to-face experience.

We’re thinking about how we can use the data we have about student progress to reshape students’ sense of “what’s normal” when it comes to school. Our goal would be to not only show students how others like them are succeeding in school, but to also paint a realistic picture of how much time and effort it takes to succeed at school. At the very least we can prevent students from feeling discouraged because it takes them ‘too long’ to study; or because they feel uniquely selfish in spending so much time on school in light of their other obligations.

At the beginning of the day David Crook from CUNY voiced his enthusiasm for the teams and the prize; having had an opportunity to digest all of the above, we can’t help thinking it would be great to have a second webinar to drill into the data with this new perspective.

All in all I think I’ve demonstrated here that the workshop provided much food for thought and advanced our thinking greatly towards our prototype for January. We also finally got an opportunity to meet and learn from the other teams in the Challenge. Thank you Robin Hood and ideas42 for organizing!

 

Donna Miller Watts presenting: Fictioning, or the Confession of the Librarian

New modes of interaction for Flip: Annotating streaming video!

At the end of every Spring Semester, the extended ITP community gathers round for a solid week (Monday-Friday, 9:40AM-6PM) of thesis presentations. It's part gauntlet and part send-off for the graduating class.

This year, with the help of Shawn Van Every (the original architect, builder and maintenance man of the ITP Thesis Streaming Service), I had the opportunity to continue my run of informal experiments in video annotation by trying it out on a handful of thesis presentations.

For the third year running, Thesis Week has been accompanied by the Shep chatroom, created by ITP alum Steve Klise. Shep immediately turned itself into a great place for backchannel commentary on the presentations…or not. I've always felt it would be great to see aggregations of Shep activity attached to the timecode of the presentation videos, but Shep conversations unfortunately aren't logged. I also wondered if turning Shep into some kind of "official record" of audience reactions to thesis would be something of a killjoy.

With the Ponder-Thesis experiment, I wasn’t exactly sure where “annotating thesis presentations” would fall on the work-fun spectrum.

It might be seen as some sort of “semi-official record.” That would make it “work,” and perhaps a bit intimidating, like volunteering to close-caption T.V. programs.

But annotating with Ponder requires more thinking than close-captioning, which presumably will soon be replaced by voice recognition software if it hasn't been already. So maybe it can be fun and engaging in the same sort of "challenging crossword puzzle" way that Ponder for text can be.

Either way, the end-goal was clear. I was interested in some sort of read-out of audience reactions: Cognitively (Did people understand the presentation?); Analytically (Was it credible, persuasive?); and Emotionally (Was it funny, scary, enraging, inspiring?).

Setup

We were able to get Ponder-Thesis up and running by Day 3 (Wednesday) of Thesis Week.

I sent out a simple announcement via email asking people to come help test this new interface for annotating thesis presentations, and to create an annotated group record of what happened.

Unlike previous test sessions, there was no real opportunity to ask questions.

Results: One Size Does Not Fit All

Annotating live events is a completely different kettle of fish than annotating static video.

I had made several changes to the static video interface in preparation for thesis, but in retrospect, they weren’t drastic enough.

All of the things I've mentioned before that make video annotation harder work and generally less enjoyable than text annotation are amplified ten-fold when you add the live element, because now slowing down to stop and reflect isn't just onerous, it's not an option.

As a result, the aspect of Ponder that makes it feel like a “fun puzzle” (figuring out which sentiment tag to apply) can’t be done because there simply isn’t time.

It was challenging even for me (who is extremely familiar with the sentiment tags) to figure out at what point to attach my annotation, which tag to apply *AND* write something coherent, all quickly enough so that I’d be ready in time for the next pearl of wisdom or outrageous claim in need of a response.

There were also hints of wanting to replicate the casual feel of the Shep chatroom. People wanted to say "Hi, I'm here" when they arrived.

Going forward, I would tear down the 2-column “Mine v. Theirs” design in favor of a single-column chat-room style conversation space, but I will go into more detail on next steps after reviewing the data that came out of thesis.

Donna Miller Watts presenting: Fictioning, or the Confession of the Librarian

The Data

  • 36 presentations were annotated. However, 50% of the responses were made on just 6 of them.
  • 46 unique users made annotations. (Or at the very least 46 unique browser cookies made annotations.)
  • 266 annotations in total, 71 of which were { clap! }.
  • 30 unique sentiment tags were applied.
    • ???, Syntax?, Who knew?
    • How?, Why?, e.g?, Or…, Truly?
    • Yup, Nope
    • Interesting, Good point, Fair point, Too simple
    • { ! }, Ooh, Awesome, Nice, Right?
    • Spot on!, Well said, Brave!
    • { shudder }, { sigh }, Woe, Uh-oh, Doh!
    • HA, { chuckle }, { clap! }
  • At peak, there were ~19-20 people in Ponder-Thesis simultaneously.

Broken down by type, there were 39 Cognitive annotations (having to do with questions or wanting more information), 69 Analytical annotations, and 158 Emotional annotations, although almost half of those (71) were the { clap! }.

Over half of the non-clap! responses had written elaborations as well (113).

  • Natasha Dzurny had the most applause at 10.
  • Sergio Majluf had the most responses at 26.
  • Kang-ting had the most emotional responses at 18.
  • Talya Stein Rochlin had the most emotional responses if you exclude applause at 14.
  • Sergio Majluf racked up the most eloquence points with 3 “Well saids!”
  • Talya Stein Rochlin had the most written comments with 15 and the most laughs at 3.

Below are roll-ups of responses across the 36 presenters categorized by type.

  • Cognitive: Yellow
  • Analytical: Green
  • Emotional: Pink

Below is a forest-for-the-trees view of all responses. Same color-coding applies.

Forest-for-the-Trees view of responses.

Interaction Issues

I made a few design changes for thesis and fixed a number of interaction issues within the first few hours of testing:

  • Reduced the overall number of response tags and made them more straightforward, e.g. Huh., which has always been the short form of "Food for thought…", became Interesting!
  • Replaced the 3rd-person version of the tags (xxx is pondering) with the 1st-person version (Interesting!), because after the last test session a long list of 3rd-person responses felt a bit wooden.
  • Added a { clap! } tag for applauding.
  • Made the “nametag” field more discoverable by linking it to clicking on the roster (list of people watching). Probably giving yourself a username should be an overlay that covers the entire page so people don’t have an opportunity to miss it.
  • As responses came in, they filled up the "Theirs" column below the video. Once there were more comments than would fit in your viewport, you wouldn't see new comments unless you explicitly scrolled. We deliberately decided not to auto-scroll the list of responses for static video to keep in time with the video, because we thought it would be too distracting. For streaming, however, auto-scroll would have just been one less thing for you to have to do while you're trying to keep pace with the video and think about how to comment.

Other issues didn’t become apparent until after it was all over…

  • People didn’t see how to “change camera view.” The default view was a pretty tight shot of the presenter. Really the view you want is a wider shot that includes the presentation materials being projected on the wall behind the speaker.
  • The last test session helped me realize that for video the elaboration text field should stay open between submits. But really, it should probably start off open as it’s too small to be something you “discover” on your own while the video is going.
  • The star button, which is meant to let you mark points of interest in the video without having to select a tag, was never used. I'm not sure how useful it is without the ability to write something.

Solutions

The obvious first step is to go in and squish the remaining interaction issues enumerated above. But there are more systemic problems that need to be addressed.

Problem:

  • People wanted to say “hey” when they logged on. The “live” nature of the experience means social lubrication is more important than annotating text or video on your own. ITP Thesis is also a special case because the people annotating not only know the presenters personally but are likely sitting in the same room (as opposed to watching online.) One person said they were less likely to speak their mind on Ponder if they had something critical to say about a presentation.
  • There is also general trepidation over attaching a comment to the wrong point in the timecode. One person who is also familiar with the Ponder text response interface described the problem as “I don’t know what I’m annotating. With text, I know exactly what I’m selecting. With video, I’m doing it blind.”

Solution: Chatroom Layout

Replace the 2-column design with a unified "chatroom" window that encourages more casual chatter. If the timecoding feels incidental (at this point in the presentation, someone happened to say such-and-such), then you worry less about attaching your annotation to the precisely correct moment.

Problem: Too many tags.

The sentiment tags got in the way of commenting. There were simply too many to choose from. People knew what they wanted to write, but the step of selecting a tag slowed them down. This was true of static video for those watching STEM instructional videos as well.

Solution: Reduce and Reorder

  • Slim down the tag choices, in effect trading fidelity of data (we’ll lose a lot of nuance in what we can aggregate) for lowering the bar for engagement. There should probably be something like 3, but alas I can’t figure out how to chuck out 2 of the following 5.
    • Question?!
    • Interesting
    • Are you sure about that?
    • HAHA
    • { clap ! }
  • Reorder the workflow so that you can write first and then assign tags after, not dissimilar to how people use hashtags in Twitter.

This rearrangement of steps would turn the live video workflow into the exact inversion of the text annotation experience, which is tag first, then elaborate, for very specific reasons that have more or less worked out as theorized.

Conclusion

The modest amount of data we gathered from this year's thesis presentations was enough to validate the original motivation behind the experiment: collect audience reactions in a way that can yield meaningful analysis of what happened. However, there remains a lot of trial and error to be done to figure out the right social dynamics and interaction affordances to improve engagement. There are clear "next steps" to try, and that is all you can ever ask for.

The only tricky part is finding another venue for testing out the next iteration. If you have video (live or static) and warm bodies (students, friends or people you've bribed) and are interested in collaborating with us, get in touch! We are always on the lookout for new use cases and scenarios.

Video Timeline

New modes of interaction for Flip videos Part 3

This past semester I’ve been experimenting with new modes of interaction for video. I’ve written about 2 previous test sessions here and here.

Annotating video is hard. Video is sound, imagery moving through time. It’s an immersive and some might say brain-short-circuiting medium. Watching 3 videos simultaneously may be the norm today. However, if you’re truly engaged in watching video content, in particular content that is chock full of new and complex ideas, it’s hard to do much else.

Watching video content makes our brains go bonkers.

“’Every possible visual area is just going nuts,’ she adds. What does this mean? It shows that the human brain is anything but inactive when it’s watching television. Instead, a multitude of different cortexes and lobes are lighting up and working with each other…”

“She” is Joy Hirsch, Dir. of fMRI Research at Columbia U, being cited by the National Cable & Communications Association who interpret her results to mean watching tv is good for our brains, like Sudoku. I’m not sure about that, but it’s reasonable to conclude that consuming video content occupies quite a lot of our brain.

Of course no one is saying reading doesn’t engage the brain. However, one key difference between text and video makes all the difference when it comes to annotation: With reading, we control the pace of reading, slowing down and speeding up constantly as we scale difficult passages or breeze through easy ones.

Video runs away from us on its own schedule whether or not we can keep up. Sure we can pause and play, fast-forward and slow down, but our ability to regulate video playback can only be clunky when compared to the dexterity with which we can control the pace of reading.

In fact the way researchers describe brain activity while watching tv sounds a lot like trying to keep up with a speeding train. All areas of the brain light up just to keep up with the action.

So what does that mean for those of us building video annotation tools?

Video annotation has all the same cognitive challenges of text annotation, but it comes with additional physiological hurdles as well.

STEM v. The Humanities

I’ve been working off the assumption that responding to STEM material is fundamentally different from The Humanities. For STEM subjects, the range of relevant responses is much more limited. It essentially amounts to different flavors of “I’m confused.” and “I’m not confused.”

I’m confused because:

  • e.g. I need to see more examples to understand this.
  • Syntax! I don’t know the meaning of this word.
  • How? I need this broken down step-by-step.
  • Why? I want to know why this is so.
  • Scale. I need a point of comparison to understand the significance of this.

I get this because:

  • Apt! Thank you. This is a great example.
  • Got it! This was a really clear explanation.

Humor is a commonly wielded weapon in the arsenal of good teaching so being able to chuckle in response to the material is relevant as well.

But as is often the case when trying to define heuristics, it’s more complicated than simply STEM versus not-STEM.

Perhaps a more helpful demarcation of territory would be to speak in terms of the manner and tone of the content (text or video) and more or less ignore subject matter altogether. In other words: The way in which I respond to material depends on how the material is talking to me.

For example, the manner and tone with which the speaker addresses the viewer varies dramatically depending on whether the video is a:

  • "How-to" Tutorial
  • Expository Lecture
  • Editorializing Opinion
  • Edu-tainment

The tutorial giver is explaining how to get from A to Z by following the intervening steps B through Y. First you do this, then you do that.

The lecturer is a combination of explanatory and provocative. This is how you do this, but here’s some food for thought to get you thinking about why that’s so.

The editorializing opinion-giver is trying to persuade you of a particular viewpoint.

Edu-tainment is, well, exactly that: delivering interesting information in an entertaining format.

And of course, the boundaries between these categories are sometimes blurry. For example, is this Richard Feynman lecture an Expository Lecture or an Editorializing Opinion?

I would argue it falls somewhere in the middle. He’s offering a world view, not just statements of fact. You might say that the best lecturers are always operating in this gray area between fact and opinion.

The Test Session

So in our 3rd test session, unlike the previous 2, I chose 3 very different types of video content to test.

Documentary on The Stanford Prison Guard Experiment (Category: Edu-tainment)

A 10-minute segment of the Biden v. Ryan 2012 Vice Presidential Debate re: Medicare starting at ~32:00. (Category: Editorializing Opinion)

Dan Shiffman’s Introduction to Inheritance from Nature of Code (Category: Expository Lecture)

You can try annotating these videos on Ponder yourself:

  1. Dan Shiffman’s Introduction to Inheritance from Nature of Code.
  2. Biden v. Ryan Vice-Presidential Debate.
  3. The Stanford Prison Experiment documentary.

The Set-up

There were 5 test subjects, watching 3 different videos embedded in the Ponder video annotation interface in the same room, each on their own laptop with headphones. That means unlike previous test sessions, each person was able to control the video on their own.

Each video was ~10 minutes long. The prompt was to watch and annotate with the intention of summarizing the salient points of the video.

2 students watched Dan Shiffman’s Nature of Code (NOC) video. 2 students watched the documentary on the Stanford Prison Experiment. And 1 student watched the debate.

The Results

The Stanford Prison Experiment had the most annotations (15 per user, versus 12 for NOC and 5 for the debate) and the most varied use of annotation tags (22 distinct tags, versus 5 for NOC and 4 for the debate).

Unsurprisingly the prison documentary provoked a lot of emotional reactions (50% of the responses were emotional – 12 different kinds compared to 0 emotional reactions to the debate).

Again unsurprisingly, the most common response to the NOC lecture was "{ chuckle }," accounting for 12 of the 25 responses. There was only one point of confusion, a matter of unfamiliarity with syntax: "What is extends?"

This was a pattern I noted in the previous sessions: with many STEM subjects, everything makes perfect sense in the "lecture." The problem is that oftentimes, as soon as you try to do it on your own, confusion sets in.

I don’t think there’s any way around this problem other than to bake “problem sets” into video lectures and allow the points of confusion to bubble up through “active trying” rather than “passive listening.”

Intro to Inheritance – NOC

Biden v. Ryan Vice-Presidential Debate

Stanford Prison Experiment

Less is More?

There are two annotation modes in Ponder. One displays a small set of annotation tags (9) in a Hollywood Squares arrangement. The second displays a much larger set of tags. Again, the documentary watchers were the only ones to dive into the second, more nuanced set of tags.

Less v. More

However, neither student watching the documentary made use of the text elaboration field (they didn't see it until the end), where you can write a response in addition to applying a tag, whereas the Nature of Code and Biden-Ryan debate watchers did. This made me wonder how having the elaboration field as an option changes the rate and character of the responses.

Everyone reported pausing the video more than they normally would in order to annotate. Much of the pausing and starting simply had to do with the clunkiness of applying your annotation to the right moment in time on the timeline.

It’s all in the prompt.

As with any assignment, designing an effective prompt is half the battle.

When I tested without software, the prompt I used was: Raise your hand if something’s confusing. Raise your hand if something is especially clear.

This time, the prompt was: Annotate to summarize.

In retrospect, summarization is a lot harder than simply noting when you’re confused versus when you’re interested.

Summarization is a forest-for-the-trees kind of exercise. You can’t really know moment-to-moment as you watch a video what the salient points are going to be. You need to consume the whole thing, reflect on it, perhaps re-watch parts or all of it and construct a coherent narrative out of what you took in.

By contrast, noting what’s confusing and what’s interesting is decision-making you can do “in real-time” as you watch.

When I asked people for a summarization of their video, no one was prepared to give one (in spite of the exercise), and I understand why.

However, one of the subjects who watched the Stanford Prison Experiment documentary was able to pinpoint the exact sentence uttered by one of the interviewees that he felt summed up the whole thing.

Is Social Always Better?

All 3 tests I’ve conducted were done together, sitting in a classroom. At Ponder, we’ve been discussing the idea of working with schools to set up structured flip study periods. It would be interesting to study the effect of socialization on flip. Do students pay closer attention to the material in a study hall environment versus studying alone at home?

The version of Ponder video we used for the test session shows other users’ activity on the same video in real-time. As you watch and annotate, you see other people’s annotations popping up on the timeline.

For the 2 people watching the Stanford documentary, that sense of watching with someone else was fun and engaging. They both reported being spurred on to explore the annotation tags when they saw the other person using a new one. (e.g. “Appreciates perspicacity? Where’s that one?”)

By contrast, for the 2 people trying to digest Shiffman’s lecture, the real-time feedback was distracting.

I assigned an annotation exercise to another test subject to be done on her own time. The set-up was less social, both in the sense that she was not sitting in a room with other people watching and annotating videos, and in that she was not annotating the video with anyone else via the software.

I gave the same prompt. Interestingly, from the way she described it, she approached the task much like a personal note-taking exercise. She also watched Shiffman’s Nature of Code video. For her, assigning predefined annotation tags got in the way of note-taking.

Interaction Learnings

  • The big challenge with video (and audio) is that they are a black box content-wise. As a result, the mechanism that works so well for text (simply tagging an excerpt of text with a predefined response tag) does less well on video where the artifact (an annotation tag attached to timecode) is not so compelling. So I increased emphasis on the elaboration field, keeping it open at all times to encourage people to write more.
  • On the other hand, the forest-for-the-trees view offered on the video timeline is, I think, more interesting to look at than the underline heatmap visualization for text, so I'll be looking for ways to build on that.

    Timeline

  • There was unanimous desire to be able to drag the timecode tick marks after a response had already been submitted. We implemented that right away.
  • There was also universal desire to be able to attach a response to a span of time (as opposed to a single moment in time). The interaction for this is tricky, so we’ve punted this feature for now.
  • One user requested an interaction feature we had implemented but removed after light testing because we weren't sure if it would prove to be more confusing than convenient: automatically stopping the video whenever you made mouse gestures indicating you intended to create an annotation, and then restarting the video as soon as you finished submitting. I'm still not sure what to do about this, but it supports the idea that the difficulty of pacing video consumption makes annotating and responding to it more onerous than doing the same with text.

Takeaways

  1. Annotating video is hard to do so any interaction affordance to make it easier helps.
  2. Dense material (e.g. Shiffman’s lecture) is more challenging to annotate. Primary sources (e.g. the debate) are also challenging to annotate. The more carefully produced and pre-digested the material (e.g. the documentary), the easier it is to annotate.
  3. With video, we should be encouraging more writing (text elaborations of response tags) to give people more of a view into the content.
  4. Real-time interaction with other users is not always desirable. Users should be given a way to turn it on/off for different situations.
  5. There may be a benefit to setting up “study halls” (virtual or physical) for consuming flip content, but this is mere intuition right now and needs to be tested further.

Last but not least, thank you to everyone at ITP who participated in these informal test sessions this semester, and to Shawn Van Every and Dan Shiffman for your interest and support.

Logging Tutorial

New modes of interaction for Flip videos Pt. 2

This semester in addition to teaching, I am a SIR (Something-in-Residence) at ITP, NYU-Tisch’s art/design and technology program.

My mission for the next 3 months is to experiment with new modes of interaction for video: both for the flip video and live events. (Two very different fish!)

User Study No. 2

I recently wrote about my first User Study with students from Dan Shiffman's Nature of Code class. A couple of weeks ago, 3 students from Shawn Van Every's class, Always On, Always Connected, volunteered to watch 4 videos. In this class, students design and build new applications that leverage the built-in sensor and media capabilities of mobile devices.

The Setup

Again, there were no computers involved. We screened the videos movie-style in a classroom. Instead of asking for 2 modes of interaction (Yes that was super clear! versus Help!), I asked for a single mode of feedback: “Raise your hand when you’ve come across something you want to return to.”

The 4 videos introduce students to developing for the Android OS. It’s important to note that simply by virtue of working in the Android development environment (as opposed to Processing) the material covered in these videos is much more difficult than what was covered in the Nature of Code videos the previous week. However, the students’ level of programming experience is about the same.

What happened…

Video 1: Introduction to logging

Zero hands. But from the looks on people’s faces, it was clear it was not because everyone was ready to move on. A few things to note:

  1. For screencasts, watching a video movie-style is extra not-helpful as it is difficult to read the text.
  2. The video begins promptly after a 20s intro. With instructional videos, the emphasis is always on brevity (the shorter the better!). In this case, however, I wonder whether 20 seconds is enough to allow you to settle down and settle into the business of trying to wrap your head around new and alien concepts. I'm sure #1 made it harder to focus as well.
  3. Reading code you didn’t write is always challenging and I think even more so on video where we’re much more tuned into action (what’s changing) and much less likely to notice or parse what’s static.
  4. Unlike before, when I asked after-the-fact, “Where in the video do you want to go back to?” the students were unable to respond. Instead the unanimous response was, “Let’s watch the entire video again.” This is where collecting passive data about the number of times any given second of a video is watched by the same person would be helpful.
  5. In general, questions had to do with backstory. The individual steps of how to log messages to the console were clear (see the brief sketch after this list). What was missed was the what and the why. First, what am I looking at? And second, why would I ever want to log messages to the console? I say "missed" and not "missing" because the answers to those questions were in the videos. But for whatever reason, they were not fully absorbed.
  6. Last but not least, I have to imagine that this watching as a group and raising your hands business feels forced if not downright embarrassing.
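
For readers following along, the "how" really is just a line or two; it's the "why" (seeing what your code is doing, and in what order, while it runs) that needs the backstory. A minimal sketch using the standard Android Log API, assuming a bare-bones Activity (not code from the video itself):

    import android.app.Activity;
    import android.os.Bundle;
    import android.util.Log;

    public class MainActivity extends Activity {

        // Conventional tag so you can filter this app's messages in the log output.
        private static final String TAG = "MainActivity";

        @Override
        protected void onCreate(Bundle savedInstanceState) {
            super.onCreate(savedInstanceState);
            // The "why": logging lets you trace what your code is doing without
            // stopping it in a debugger.
            Log.d(TAG, "onCreate called, savedInstanceState=" + savedInstanceState);
        }
    }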

Hopefully, a software interface for doing the same thing will elicit more free-flowing responses from students as it will provide them with a way to ask questions “in private.”

Videos 2-4

Everyone was more settled in after watching the logging video, and each subsequent video built on the momentum accumulated in the previous one. Starting with Video 2, I began to get some hand-raising, and when we returned to those points in the video, people were very specific about what was confusing them.

"Why did you do that there?" or its converse, "Why didn't you do that there?" was a common type of question, as was "What is the significance of that syntax?"

Another way to look at it is: There were never any issues with the “How.” The steps were clearly communicated and video is a great medium for explaining how. The questions almost always had to do with “Why?”, which makes me wonder if this is the particular challenge of video as a medium for instruction.

Does learning “Why?” require a conversation initiated by you asking your particular formulation of “Why?”

Video 2: Toasts

Video 3: Lifecycle of an Android App Part 1

  • @2:00: What is the significance of @+id? Why aren't you using the strings file for this?
  • @6:50: Why did you change arg0 to clickedView?

Other syntax questions (each illustrated briefly in the sketch after this list) included:

  • What’s the difference between protected versus public functions?
  • What's extends and implements?
  • What's super()?
  • What's @Override?
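
Since these keywords trip up nearly everyone new to Java-flavored Android code, here's a tiny illustration of each in plain Java; it's a made-up example, not code from the videos:

    // Plain-Java illustrations of the keywords the questions are about.

    interface Clickable {            // an interface: a contract with no implementation
        void onClick();
    }

    class Shape {
        protected double area() {    // protected: visible to subclasses (and the package),
            return 0;                // but not to arbitrary outside code, unlike public
        }
        public String describe() {   // public: visible to everyone
            return "a shape with area " + area();
        }
    }

    class Circle extends Shape implements Clickable {  // extends: inherit from a class
        private final double radius;                    // implements: fulfill an interface

        Circle(double radius) {
            // super() calls the parent class's constructor; Java inserts the
            // no-argument super() call automatically if you leave it out.
            super();
            this.radius = radius;
        }

        @Override                     // @Override: tells the compiler (and readers) that this
        protected double area() {     // method replaces the parent class's version
            return Math.PI * radius * radius;
        }

        @Override
        public void onClick() {
            System.out.println("clicked " + describe());
        }
    }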

All of these syntax questions pointed to a much larger and deeper rabbit hole having to do with object-oriented programming and encapsulation, a quick way to derail any getting-started tutorial.

In general though, I don't think these syntax questions prevented you from understanding the point of the videos, which were about creating pop-up messages (Video 2) and the lifecycle of Android apps (Videos 3 and 4: when do they pause versus stop, re-initialize versus preserve state).

Video 4: Lifecycle of an Android App Part 2

In Video 4, there were a lot of nodding heads during a section that shows the difference between pausing/resuming and killing/creating an Android app. The demonstration starts around @2:20 and goes for a full minute until @3:23. Epic in online video terms. It’s followed by a walk-through of what’s happening in the code and then reinforced again through a perusal of Android documentation @3:50 where an otherwise impenetrable event-flow diagram is now much more understandable.

It's also important to note that both the demo, which starts at @2:20, and the documentation overview @3:50 are preceded by "downtime" spent trying, failing, and debugging while starting the Android emulator and navigating the browser to find the flow diagram.

In general there’s a lot of showing in these videos. Each concept that’s explained is also demonstrated. This segment however was particularly lengthy and in accordance with the law of “It-Takes-A-While-To-Understand-Even-What-We’re-Talking-About” (#2 above), my completely unverified interpretation of events is that the length of the demonstration (far from being boring) helped everyone sink their teeth into the material.

What’s Next and Learnings for Design

As we head into testing actual software interfaces, this 2nd session gave me more concrete takeaways for workflow.

  1. You lay down “bookmarks” on the timeline as you watch to mark points you would like to return to.
  2. You lay down “bookmarks” on the timeline after you’ve watched the video to signal to the instructor what you would like to review in class the next time you meet.
  3. You can expand upon “bookmarks” with an actual question.
  4. You select from a set of question tags that hopefully help you formulate your question. (More to come on this.)

While it's important to break down the videos into digestible morsels, it's also important to make it easy to watch a few in a row, as the first one is always going to be the most painful one to settle into. There are ways to encourage serial watching with user interface design (e.g. a playlist module, a next-video button, previewing and auto-playing the next video). But perhaps something can be done on the content side as well, by ending each video on a so-called "cliffhanger."

 

Hey, Teachers: We’re Listening!

Teachers created Ponder. Sure, our engineers, design team, and business interests inform the product. But the feedback we've gotten from teachers since before Ponder was born has turned it into the powerful tool it is today. Exciting events like EdSurge's Tech for Schools Summit in Silicon Valley confirm for us that we're on the right track. They also, fortunately, offer us an opportunity to get more feedback from teachers to inform our future growth. EdSurge posted the feedback we received there, and we wanted to take this opportunity to respond to this invaluable information.

Firefox, Chrome and iOS

First, we heard from some of you that you weren’t sure whether Ponder would work on your school’s platforms. Guess what? It will! So long as you have Chrome or Firefox browsers on your computers or you have iPads, you’ll be good to go.

Since November, we’ve actually already addressed some of the suggestions these teachers brought to our attention. Most notably, we’ve increased the types of resources on which students can Ponder, now including video and ePubs, which we hope will be helpful to Danielle and others interested in incorporating a variety of resources into their curriculum. We also built “elaborations” into the Ponder box, addressing Gabe’s request to allow students to comment on their reading in a more open-ended way.

Click the pencil to write

Click the pencil to elaborate!

Gabe had another great suggestion: build Ponder to work with more languages. We’re working on it! Some of our beloved bilingual teachers have been hard at work helping us translate the sentiments into other languages. We expect to be piloting Ponder in Spanish this semester, in fact.

Ponder Sentimientos!

Other suggestions, such as one from a 4th grade ELL teacher, focused on making Ponder more developmentally appropriate for elementary students. Ever since our conversations with teachers at Lonnie B. Nelson Elementary School in South Carolina this past summer, we've been working to make Ponder a powerful critical reading tool for younger readers. This is an area where we especially could use the help of seasoned experts who know these learners better than anyone.

Feedback from teachers motivated us to build Ponder to work with video, eBooks, and in other languages, as well as to add features like elaborations. We asked teachers for guidance throughout the process, and we are now piloting each feature in order to evolve them further. And as much as we love our teachers, we could always use more help! If you are especially interested in video, eBooks, or foreign language, or you just want to play a key role in developing a new technology, we’d love to hear from you. Without teachers like you, we wouldn’t be where we are today.

 

Learnings from the Classroom: Visualizing Reactions on Reading Assignments

Recently I wrote about the lessons we’re learning from our first K12 pilots this semester.

Our biggest challenge thus far has been adapting Ponder, which was originally designed around self-directed reading scenarios, to assigned reading.

Whereas a really active self-directed article might provoke a dozen or so responses, assigned reading can generate 100-200 responses from a class of 20 students.

This can easily overwhelm both the feed and the article page itself. In my last post I wrote about how we’re starting to ameliorate the issues in the feed.

Color Coded Sentiments

Red Light, Green Light, Yellow Light: React, Evaluate, Comprehend.

We’ve also recently shipped a change to the browser add-on to provide teachers and students with a forest (as opposed to the trees) view of student responses.

Those of you using Ponder might have noticed that our Sentiment tags in the Ponder response box are color coded.

We're now using those colors on the article page itself, so you can see at a glance where students are responding emotionally, where they're having comprehension issues, and where they're exercising judgement.

Yellow marks responses having to do with basic comprehension (or incomprehension, as the case may be) of the reading:

What does this mean? I’d like examples. I need a break down.

Green marks responses that pass judgement through evaluation:

This is hyperbole, oversimplification, insight!

Red marks responses that express some kind of emotional reaction:

Disapproval, regret, admiration.

The tick marks on the right give you a sense of the activity level across the entire reading, be it a one-page article or a 100-page essay.
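
For the curious, the mapping itself is just a lookup from sentiment type to color. A minimal sketch follows; the enum names and hex values here are illustrative, not Ponder's actual identifiers or palette:

    import java.util.EnumMap;
    import java.util.Map;

    // Illustrative sentiment categories; not Ponder's actual identifiers.
    enum SentimentType { COMPREHENSION, EVALUATION, EMOTIONAL }

    class SentimentColors {
        private static final Map<SentimentType, String> COLORS = new EnumMap<>(Map.of(
                SentimentType.COMPREHENSION, "#f0c419",   // yellow: comprehension questions
                SentimentType.EVALUATION,    "#2e8b57",   // green: evaluative judgement
                SentimentType.EMOTIONAL,     "#d9534f"    // red: emotional reactions
        ));

        // Color used to underline an excerpt and draw its tick mark.
        static String colorFor(SentimentType type) {
            return COLORS.get(type);
        }
    }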

Visualizing Sentiments By Type

It’s a small step, but it’s the kind of thing we want to do more of to help teachers get a quick sense of how the class responded as a whole to the reading.


Learnings from the Classroom: The difference between self-directed and assigned reading.

We've been iterating on and refining Ponder in the higher ed classroom for two years now, and it's been really interesting to compare that experience to the past two months of watching our K12 classes get going. (Early on, we hit an IT-related snag at the WHEELS Academy.) Now we're getting to the good stuff: how students are actually using Ponder to do close reading, and how a teacher might use it to evaluate their students.

In many respects the K12 classroom is much more demanding than higher ed, though both present us with the challenge of figuring out:

How to make Ponder work for both self-directed *and* assigned reading.

What are the key differences?

One of the features that’s worked out really well for self-directed reading is that unlike most social media feeds which are built around individuals, the Ponder Feed rolls up student responses by article. That means in the feed, you quickly get a sense of where the conversations are happening even if students happen upon the same article independently.

However with assigned readings where even short two page articles can generate over a hundred student responses, rolling up responses by article is just disorienting and overwhelming and fails to provide teachers with a quick way to evaluate each student’s understanding of the reading.

Three classes in particular really helped us understand the problem better: Mr. V's 9th grade Global Studies class at Stuyvesant H.S., Ms. Perez's 8th grade English class at xxx in Chicago, and Tom Lynch's graduate-level Curriculum Development and Instruction Planning with Technology class at the Pace University School of Education.

We knew this was going to be a problem but it wasn’t clear to us how best to address this issue quickly until the first assigned reading responses began to roll in…

As a quick fix, we re-collated assigned reading responses around the student. It's an improvement on what we had before, but it's not entirely clear this is the best solution. We've gained clarity around how each student responded to the text, but we've lost the thread of conversation: how students are responding to each other.

The path to supporting assigned reading well is going to be a steep and rocky one, but we know the only way to negotiate it is through trial and error and paying close attention to what’s going on in our classrooms.

Identifying Teaching Moments at the NYC DOE Shark Tank

On Friday we were invited to present at NYC DOE’s Teacher Shark Tank, one event in a series where three edtech startups get 30 minutes each to present and answer questions from DOE teachers.

The Teacher Shark Tank is hosted by iZone, NYC DOE's Office of Innovation, which supports schools in personalizing learning to accelerate college and career readiness among our students.

Ponder is running in many schools across the country this semester, but in our hometown of New York, we are in one NYC DOE school (Stuyvesant H.S.), as well as one NYC charter school (WHEELS) and one NYC private school (Trinity School). This was our first opportunity to formally present to DOE educators at a DOE-organized event, so we were excited to be there!

Other presenters included Quill, who has figured out a way to blend learning grammar into an interactive reading experience, and Fast Fig, a word processor for math that enables teachers to cleanly and easily create equations and graphs online – a long-sought-after solution with many applications!

We had a late start, but this didn't deter the great group of interested and engaged teachers who are clearly the vanguard of technology users at their schools (City as School, High School of Telecommunications Arts and Technology, and P.S. 64 the Robert Simon School).

We wanted to impress this audience in particular. Fortunately, over the past two years of watching classes use Ponder (first graduate business classes, then undergraduate philosophy classes, then 12th grade English and 9th grade global studies classes, and now 2nd grade ELA classes!), we've evolved how we present and explain Ponder.

In our presentation Friday, Ben and I focused on one key concept: the speed at which a teacher can review student micro-reading responses. How fast can a teacher review Ponder micro-reading responses, you ask? Real fast. Fast enough that teachers can encourage their students to make as many responses as they'd like, knowing they will have time to grade them all and provide meaningful feedback. In fact, our conceit (which has proven true in higher ed and is starting to prove itself in K12 as well) is that not only will the instructor be able to review everyone's responses, they'll be able to do so *before* class starts, and actually use their students' responses as the basis for in-class discussion.

To prove my point, Ben and I put up four different Ponder micro-reading responses from a single 8th grade class in the Chicago Public School system and asked the teachers in the room how quickly they could assess each one.

Number 1: A solid response.

No. 1 Coherent and appropriate.

The excerpt that the student chose is coherent, though it's not making a particularly controversial or insightful point. The sentiment s/he applied (I empathize.) is appropriate, though not particularly nuanced, nor does it exhibit deeper insight or independent thinking.

 

Number 2: Exemplary!

No. 2 Real insight and independent thinking!

The excerpt is coherent and interesting, making a surprising, counter-intuitive argument. The sentiment applied is spot on, demonstrating the student clearly understands the author is making a claim and now needs to substantiate it with supporting evidence.

Number 3: Red Flag!

No. 3 Incoherent and inappropriate.

The selection itself is incoherent. And the sentiment is clearly inappropriate. Either the student is completely lost and doesn’t understand the point of the assignment or is simply not trying at all.

Number 4: A Teaching Moment.

No. 4 What is there to agree about?

This is where things start to get interesting. This is an opportunity for what would pedagogically be referred to as a "teaching moment," an invitation for further discussion in class. First of all, the selection itself is interesting. The author describes an interaction that is clearly intended to provoke some sort of emotional reaction from the reader. However, the student chose to agree with it – not the reaction the author probably intended! So, why did you concur? What are you agreeing with? What is the idea that you thought emerged from this quote? Or, perhaps, you've identified a moment in which the student wasn't reading very carefully at all, which is valuable in and of itself.

We maintain a long list of ideas on how to better support this process of evaluating reading responses. It changes week to week as we watch our K12 classes settle into how to use Ponder while discovering new uses for it as well.

Still, I think we’ve reached an important milestone in delivering on the promise of providing a way for students to “practice critical reading” while giving teachers a way to respond to and build on that practice.

And, let it not go without saying, we are lucky to have such thoughtful students and teachers using Ponder that we can so easily find a mountain of interesting responses!

EdSurge Silicon Valley Summit: Gameify the Conversation

For better or worse, the Valley has become a rather glam place, in an Iron Chef! kind of way.

The grumpy old man in me is often wistful for the days when credibility was measured in the age and wear of your company t-shirts (and when everyone got the same tent-cut XXL Hanes t-shirt regardless of the size and shape of the wearer).

EdSurge’s Tony Wan, making it look easy!

I’m happy to report that this past Saturday the get-your-hands-dirty pragmatism I fell in love with when I first arrived in the Valley was out in full force in Mountain View, CA, albeit with better fitting t-shirts. The t-shirts were green and they read: “Keep Calm and Read EdSurge.”

A few weeks ago, I wrote about a courageous first effort at enabling real educator-edtech conversations in Rhode Island. There were problems, potentially event-killing catastrophes, but by thinking on their feet, EdSurgers, Highland Instituters and EdTechRI-ers adapted and iterated “in real-time” and pulled off a successful event. There were clear lessons coming out of that first event, and I was excited to see how “agile” EdSurge could be with how they ran events.

This was the basic setup…

Ready to Ponder @8AM

  • A generous room of large round tables, one for each company, each surrounded by chairs, loosely grouped by product focus.
  • Power strips for each table with tape for the inevitable cable undergrowth.
  • Surprisingly solid wifi, especially given the attendance.
  • Efficient lightning talks in a separate, yet adjacent, auditorium.

These structural changes got the whole thing moving…

  • No parallel workshops!!! More on this in a moment.
  • A team of easily identifiable green-shirted people assigned to specific companies to match-make educators with technologies.

One final stroke of genius put it over the top…

A raffle for teachers, with real prizes (a bunch of Surfaces, iPads, Chromebooks, Lynda.com subscriptions, no keychains, no squeezyballs). But this was no fundraising raffle. The way teachers obtained raffle tickets was by providing feedback to the companies. Let me just repeat that so you can fully absorb what a great idea it was. (I’ll make the font size really big too.)

The way teachers obtained raffle tickets was by providing feedback to the companies.

Technically, the way it worked was that I was supposed to give them one ticket for filling out a short 1-2 minute survey at my table, and then they could get three tickets by filling out a longer survey at the main EdSurge table-island in the center of the room.

In reality, EdSurge had successfully game-ified cross-discipline conversation. Once the game was clear, everyone knew how to play.

Gone was the awkward mingling, the Who’s-going-to-say-something-first? interactions. Everyone had a non-stop flow of teachers coming up to them with a clear agenda: “Tell me about your product. I teach ____ to grades ____ and I want to hear how you think it could be relevant to me and my students.”

Really, no workshops?

One of the concerns I had as the event planning was coming together was the ballsy decision to not have any PD-type workshops at the summit. I had a secret fear that without that incentive many teachers would not make the trek to come to the conference at all. My fear was all for nought. I’m curious what the final EdSurge numbers will be, but I spoke with well over 100 educators in a constant stream from 9AM to 4PM with just one break for the lunch panel. I’m not sure how to extrapolate that over the 30 vendors, but the place was packed and filled with energetic voices and enthusiastic discussion.

My admittedly self-serving theory is that by putting a stake in the ground and saying:

“This event is dedicated solely to bridging the communication gap between educators and edtech so ‘Conversation’ is going to be its sole activity.”

EdSurge showed everyone involved that they really meant it. And in my experience, teachers are some of the best Do-you-mean-it? detectors out there. Still, I think that took guts and it paid off – “Conversation” was center stage the whole day.

The kids will be alright.

There’s just one last thing I need to sing the praises of, and then I promise I’ll get more critical. The lunch panel featured 10 kids from roughly 1st through 12th grade. It’s a “stunt” I have seen attempted at various other events, and I call it a “stunt” because that’s how it often comes off. EdSurge, however, managed to pull it off. The moderator, Chris Fitz Walsh from Zaption, did a great job of asking questions, and the kids they picked provided real insights beyond the usual “I like to tweet so schools should tweet too.” Judge for yourself:

“My teacher has a little microphone clipped to her shirt so everyone can hear her clearly even when her voice is tired.”

Huh, I would never have thought of that.

So, no room for improvement?

Just to shore up my plummeting credibility, here’s a list of complaints:

  • There were so many teachers that it became unrealistic for them to both talk to me and then fill out the 1-2 minute survey at my table in exchange for more raffle tickets, so I simply started handing them out to whomever I spoke with. I like the idea of eliciting more frank, quantifiable feedback through the survey. But there simply wasn’t enough room for teachers to fill out the survey without creating a bottleneck for conversations. Perhaps if the raffle ticket was attached to the survey itself, then I wouldn’t need to play “enforcer” for whether a teacher “earned” a raffle ticket. The problem with that would be people filling out the survey without even coming to the table…maybe there isn’t a solution.
  • We were almost shouting over one another to be heard. (I know, a good problem to have!) It was definitely a function of attendance levels, but paying attention to the acoustics of the room might have helped manage noise levels. Still, a noisy room is an energetic room, so in the end, it probably did more good than harm.
  • I didn’t have a great way of keeping track of all the teachers and schools who came by to talk other than giving them hand-outs or having them scribble emails on a sheet. Because we’re still in pilot-mode for K12, we’re trying to be extra hands-on with our teachers, and I was worried the whole time that I wouldn’t be able to follow up with all the people I was talking to. This seems like a very solvable problem though.

So what’s next?

Now that EdSurge has a template for this kind of event, I’m hearing whispers that lots of other cities around the country are requesting their own version, and my guess is that EdSurge will deliver. But what about new templates to address other gaps in edtech? Creating a “conversation space” where educators and technologists can talk is just a first step.

Chatting afterwards with Idit Harel Caperton from Globaloria and EdSurge’s Mary Jo Madda, I heard Idit suggest a principal/administrator/budget-decision-maker focused event, which at the very least this edtech startup would find incredibly useful.

We are a few weeks from sending a survey out to our K12 pilot schools about our pricing plans. We’re still struggling with a conundrum: teachers love the idea of using Ponder, but rarely have any personal budget to pay for it at a price that would sustain the service longer-term. I’m sure we’re not alone.

Unlike professors in higher-ed, K12 teachers lack the personal agency to purchase tools. Yet they, more than principals and administrators, know which tools would actually be useful. I also suspect edtech companies need to address structural issues to be convincing to budget-decision-makers:

  • We need a tidy answer to the question “Does it work?” in the form of objective efficacy validation. We’re working on it now!
  • We need tried and true best practices to help manage the risks of trying new technologies in classrooms that have little room for wasting time.
  • We need to reward teachers and schools for taking risks and being open to experimentation. From our perspective, we learn far more from failures (e.g. the technology made no difference) than from unmitigated successes.

I know, this should really be a separate post.

I’ll just wrap up and say: thank you, EdSurge; your hard work and attention to detail showed. We’re not just reading, we’re looking forward to more and following your lead!