Ideas Never Sleep: Powered by the Ponder API

Back in May we announced the release of our V2 API, but at the time couldn’t tell you about the exciting progress our early partners were making. Now we can.

Powered by Ponder

The WR Berkley Innovation Lab at New York University’s Stern School of Business has launched Ideas Never Sleep, a community built on fresh ideas from academics in the NYU community and beyond, and it’s powered by our V2 API.

INS publishes a steady stream of well-produced videos of provocative conversations with thought leaders on a range of social, political and economic issues that start you pondering. Luckily, our API provides a way for readers and watchers to articulate that pondering quickly and thoughtfully. Their implementation also demonstrates the flexibility the API provides for integrating annotation and discussion into each partner’s unique look and feel.

So far they have integrated the video response interface, visible to the right of the YouTube embed in the screenshot below, with 8 sentiments and an elaboration box. Below the video, you can see the response timeline, with tick marks indicating the points in the video at which users have commented. One user’s comment is selected, and the sentiment and elaboration are visible. Of course, you don’t have to squint at the screenshot: you can see this particular video piece (and others) live!
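
For the technically curious, here is a rough sketch of the kind of call an integration like this makes when a viewer submits a response. The endpoint path, field names and token handling below are simplified placeholders for illustration, not our exact production API; the real details live in our API documentation.

    import requests

    API_BASE = "https://api.example.com/v2"  # placeholder base URL
    API_TOKEN = "partner-api-token"          # issued to each partner

    def post_video_response(video_id, seconds, sentiment, elaboration):
        """Submit a time-stamped micro-response to a video (illustrative sketch)."""
        payload = {
            "video_id": video_id,        # the partner's identifier for the video
            "time_offset": seconds,      # point in the video being responded to
            "sentiment": sentiment,      # one of the configured sentiment labels
            "elaboration": elaboration,  # the viewer's free-text elaboration
        }
        resp = requests.post(
            f"{API_BASE}/responses",
            json=payload,
            headers={"Authorization": f"Bearer {API_TOKEN}"},
        )
        resp.raise_for_status()
        return resp.json()

    # e.g. post_video_response("cryptocurrency-101", 312, "I'm skeptical", "Where does this data come from?")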

Congratulate them on the launch and the great design, then check it out and join the conversation!

INS Cryptocurrency 101

INS Cryptocurrency 101 by Professor David Yermack

 

 

 


Ponder Platform API V2 Release

I’m excited to announce the public launch of the Ponder API V2! We soft-launched the first set of platform APIs about a year ago, and learned a lot from our initial consumers.

Cognitive-Analytical-Emotional Heat map

Embed the Ponder cognitive analytical emotional heat map on your content!

V2 is more robust and brings many enhancements to our white- and gray-label integration scenarios, and we are making it publicly available. At a high level it supports:

  • Account Creation & Authentication (SSO)
  • User & Group Administration
  • Retrieving Activity Data
  • “Native” User Interactions (without the browser add-on)
Video Cognitive-Analytical-Emotional Heat map

Ponder’s cognitive analytical emotional heat map works on video too.

These methods are designed around scenarios where partners layer the Ponder micro-response interface and heat map on top of their content (text and video), extending their infrastructure to incorporate flexible, structured and thoughtful content-driven interactions between their users.
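
To make the “Retrieving Activity Data” piece concrete, here is a minimal sketch of pulling recent responses for a group. The endpoint and parameter names are illustrative placeholders rather than the exact production routes; see the API docs for the real interface.

    import requests

    API_BASE = "https://api.example.com/v2"  # placeholder base URL
    API_TOKEN = "partner-api-token"          # issued during partner onboarding

    def fetch_group_activity(group_id, since):
        """Retrieve recent micro-responses for a group (illustrative sketch)."""
        resp = requests.get(
            f"{API_BASE}/groups/{group_id}/activity",
            params={"since": since},  # e.g. an ISO-8601 timestamp
            headers={"Authorization": f"Bearer {API_TOKEN}"},
        )
        resp.raise_for_status()
        # Each item would carry the responder, sentiment, excerpt or time offset,
        # and timestamp, ready to be rendered in the partner's own look and feel.
        return resp.json()

    # e.g. activity = fetch_group_activity("econ-101", "2014-06-01T00:00:00Z")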

Of course, if you’re interested in integrating Ponder into your service, get in touch.

Along the way, we spent a bunch of time investigating various API documentation tools, and fell in love with Speca.io, so we wanted to give them a shout-out for making a great tool. A few favorite features:

xkcd: API

(Courtesy xkcd)

  • Paste in your JSON blobs (like the sample response below) and it will automatically parse them into documentation; you just add descriptions and notes.
  • Embed and reference any data element in your docs elsewhere in your docs – no more updating changes in multiple places!
  • Their editing tool is nice because, unlike in some other doc tools, you’re not just editing one gigantic YAML file.
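
For example, pasting in a made-up response blob along these lines (illustrative field names, not real user data) is all it takes for Speca to generate a documented schema, which we then annotate with descriptions and notes:

    {
      "response_id": "r-1842",
      "video_id": "cryptocurrency-101",
      "time_offset": 312,
      "sentiment": "I'm skeptical",
      "elaboration": "Where does this data come from?",
      "created_at": "2014-06-01T14:05:00Z"
    }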

We use Postman as our API explorer, which is also very handy.

 

 

 


Announcing Ponder for Irish (Gaeilge) – a COGG-funded Pilot Program

COGG Logo

We are proud to announce that Ponder is the recipient of a generous grant from An Chomhairle um Oideachas Gaeltachta & Gaelscolaíochta (COGG) to adapt Ponder for Gaeilge, the Irish language. The grant covers the costs of the language work and a professional development workshop to kick off a pilot in schools across Ireland in preparation for broad availability in Irish-language classrooms and reading groups.

More than ten schools have registered their interest so far, and we will be running a day-long workshop in early December. Interested Gaelscoileanna and English-medium schools should contact us by selecting Webinar Request in this ticket form. We have begun running online webinars to provide additional background and answer questions, and will be running more in the coming weeks.

 

Map: Irish Ponder Pilot Schools

Irish Ponder Pilot Schools

More about Ponder

Ponder is a higher-order literacy tool, designed to support students in critiquing and evaluating texts and videos. It does this through a flexible and fun social media platform intertwined with the learning experience. There is an introductory video on our site, and a series of interviews with current Ponder teachers discussing teaching challenges and the ways they have implemented Ponder.

Ponder is already used across the US, and this year, together with Anna Davitt (Hibernia College) and Fiona Nic Fhionnlaoich (NUIM-Froebel), we will be collaborating with teachers from Irish schools to develop a version of Ponder for Gaeilge.

Language teachers are familiar with the challenge of fully immersing students in a language. Reading casually in the language and chatting with friends are important parts of building and maintaining fluency. Outside of assigned homework, Ponder supports these activities by creating a pedagogically-sound social media environment for students to practice Gaeilge.

Adapting Ponder for a new language is both a linguistic and a cultural translation process and is always fascinating.

Pilot Details

Beyond the opportunity to shape Ponder for Gaeilge, participating schools will receive:

  • A free year-long site license of Ponder for all of their teachers and students
  • A travel stipend for one teacher to attend a day-long workshop on using Ponder
  • ICT Implementation support

The Ponder workshop will include:

  • An overview of Ponder and common implementation strategies.
  • Small group brainstorming of lesson ideas by subject area.
  • 1:1 hands-on setup of classes, using materials teachers bring with them to the session.
  • Small group live Ponder lesson amongst attendees.
  • Growing and evolving the draft set of Gaeilge sentiments as a group.

“Buail an iarann te” (strike while the iron is hot) and Contact us for more information!

Ponder for your favorite language

Ponder Gaeilge

“Tá mé ag smaoineamh faoi seo.” (“I’m thinking about this.”) Ponder Irish sentiments.

Behind the scenes here at Ponder, we have been slowly expanding our language coverage in collaboration with enthusiastic Ponder educators! Beyond the work on Gaeilge, we now support Español and عربي, and have a sentiment set in progress for Kinyarwanda.

Adapting Ponder for a new language is both a linguistic and cultural translation process – you can’t create a slang-infused critical thinking scaffold without a lot of head-scratching and word play. One component is identifying and incorporating relevant idioms and proverbs to provide a more fluent and poetic discussion experience. It’s a collaboration with native-speaking educators, and requires classroom time to gather the feedback necessary to get the tone of individual sentiments right, as well as the distance between different sentiments.

If you’d like to work with us to create a sentiment set in another language, let us know!

 

Actionable Data: Survey Responses from EdSurge Baltimore

Companies love conducting surveys. Most of those surveys are pretty awful, to the point of uselessness.

So it was a nice surprise to find a few gems amongst the questions asked in the survey from the latest EdSurge event in Baltimore. Three that I liked in particular:

1. If administrators were looking to purchase this product for their school, how strongly would you advocate for this product?

The question is not just another “Rate this product on a scale of 1 to 5” – a purely theoretical exercise that’s meaningless compared with the much more concrete task of imagining how far you’d stick your neck out to try a new product.

2. Forget about the current price of the product (if it’s free, forget about that too). If you were given an extra $100 to spend per user (e.g. student, teacher, etc.), how much would you be willing to spend for this product?

A more confusing alternative might have been “How much would you pay for this product?” Instead, teachers are given a hypothetical that gets rid of issues that often confuse the question of pricing (e.g. variations in teachers’ discretionary budgets, how much was granted and how much they have remaining, whether a product could be obtained for free) and focuses on the teacher’s sense of a fair price. This question replaced my previous favorite: “How much would you expect to pay for a tool like this?” which Desmos founder Eli Luberoff shared a few months ago.

Last but not least, I want to mention this question not so much because it’s particularly well-worded, but because I rarely hear it discussed:

How often would you use this product?

Room for Improvement?

Now that I’ve gotten the praise out of the way, here are four specific ideas on how the surveys could be even better.

1. Enable companies to respond to feedback through a Craigslist-style mail-relay mechanism, which would allow direct communication while keeping teacher identities anonymous.

2. Provide teachers more of an incentive for quality (of feedback) over quantity (of responses) by allowing each company to enter one survey response in a “Most Useful Feedback” raffle.

3. Don’t allow teachers to respond to surveys before the event starts! I had a teacher show up before the start of the Baltimore Summit who told me she had already looked at Ponder online and decided it was a terrible idea, but if I wanted I could try to convince her otherwise. So I spent ten minutes walking her through our product, after which she said “Well, this is pretty great actually. I’m going to recommend it to the teachers at my school – you should put all of what you said to me on the web site.” Which is fine, since we definitely need to improve the story on our site, and I was glad she was excited. Then she said “I already filled out the long-form survey yesterday, so can you give me the tickets?” I gave her the tickets, and as she walked away I realized that whatever her initial reaction was would now be captured in our public survey results, with no mechanism for her to update it or comment on her misunderstandings.

4. Some sort of tech-savviness question for the teachers to provide some context for their answers. How often do you use tech in your class today? Have you tried other tools similar to this one in purpose and functionality?

An even bigger idea: The EdSurge Census

Here’s more of a project-sized proposal for EdSurge: a survey of educators, schools and infrastructure – The EdSurge Census. A grassroots, lay-of-the-land sort of thing, which could be independent of the summits. I think it would be valuable to ed-tech companies and the schools themselves, as well as funders and foundations. In the spirit of the new “50 States Project,” but a bit more data-oriented.

A few obvious questions we’d love answers to come to mind:

Devices – What schools have, what they think they’ll have, what they wish they’d had, etc.

eBooks – In our experience most K12 schools seem to be lost at sea when it comes to ebook platforms – they don’t like being locked in, it complicates their device story, they have physical books which are easy to manage, etc.

Adoption – What tools are teachers using today and how often are they using them? Tools should be categorized by function (e.g. Administration and Logistics, Planning and Instruction), subject matter and grade level, so it’s easy to see where the gaps are.

EdSurge could charge companies to fund the data collection and reporting. Or perhaps one of the edu heavyweights would be willing to step up to fund it.

A statistically accurate portrait of the state of ed-tech in schools is unlikely to emerge from such a survey. But so long as they were positioned honestly as an informal sampling, the results would be undeniably useful.

Whatever happens, I’m looking forward to seeing the next incarnation of the EdSurge surveys in Nashville!

Implementing Flip: Why higher-order literacy is not just about text

Within the field of “instructional” EdTech, Ponder is often described as a “literacy” tool. That’s accurate, but literacy encompasses a much broader spread of pedagogical challenges than the label suggests. We usually describe our focus as “higher-order” literacy – the ability to extract meaning from and think critically about information sources.

A couple of months ago we began our pilots of Ponder Video, bringing our patent-pending experience to the medium more often associated with the flipped classroom. From our experience with text over the past two and a half years, we knew this would be an iterative process, and as expected we are learning a lot from the pilots and the experimentation – see part 1 and part 2 of our interface studies.

During this process, some people have asked if Ponder Video is, in startup terminology, a “pivot” – a change of strategy and focus for our organization. The question: do we still consider Ponder a literacy tool?

After a bit of reflection the answer is a resounding YES! And the process of reflection helped us gain a deeper appreciation for what “literacy” actually means. This is not a change of strategy; it is an expansion of Ponder to match the true breadth of literacy.

Literacy > Text

The term literacy is most often thought of as the ability to decode words and sentences. That is, of course, the first level of literacy, but in many of the new pedagogical and assessment debates, from the Common Core to the SAT, the focus is shifting away from memorizing facts and vocabulary and towards students developing higher-order literacy. Still, higher-order literacy is a vague concept, and at Ponder we are always searching for ways of articulating our vision more clearly.

One line I like, from a now-deprecated page of ProLiteracy.org with no byline, does a really nice job of concisely capturing the significance of a broad definition of literacy: “literacy is necessary for an individual to understand information that is out of context, whether written or verbal.”

The definition is so simple, you might miss its significance. So let me repeat it:

“literacy is necessary for an individual to understand information that is out of context, whether written or verbal”

I like it because “understand information” goes beyond mere sentence decoding, and “out of context” unassumingly captures the purpose of literacy – to communicate beyond our immediate surroundings. The “or verbal” I would interpret broadly to include the many forms that information comes in today – audio, video and graphical representations.

The 21st century, at least so far and for the foreseeable future, is the interconnected century, the communication century, the manipulated statistics century, the Photoshopped century, perhaps the misinformation/disinformation century, and I would posit that if there is one “21st century skill” that we can all agree on, it is literacy, in the broad sense:

Understanding information out of context.

A text or video is inherently out of context, so a student at home is not only one step removed from the creator of the content, but also removed from the classroom. A question immediately jumps to mind:

Are your students ready to learn out of context?

The answer to this question varies dramatically, and is not easily delineated by grade level; defining that readiness to provide an appropriate scaffold requires care, and is something we have worked to understand empirically through student activity in Ponder.

The National Center for Education Statistics, part of the US Department of Education, has put a lot of effort into defining and measuring this skill, and has twice performed a survey it calls the National Assessment of Adult Literacy (NAAL), providing a useful jumping-off point for thinking about your students.

This is not like one of those surveys you read about. It is a uniquely thorough survey that consists of a background questionnaire and screening process, followed by an interview.

The NAAL is made up 100% of open-ended, short-answer responses – not multiple choice – and focuses on the participants’ ability to apply what they have read to accomplish “everyday literacy goals”. You read something, then answer a question that depends on something you have to have extracted from the reading.

As you might imagine, this is not a quick process.

Administering the NAAL takes 89 minutes per person, and in 2003 it was administered to 18,000 adults sampled to represent the US population. That’s almost 26,700 person-hours (89 minutes × 18,000 people), or more than three years of round-the-clock interviewing.

This thoroughness is important given that they are trying to measure a broad definition of literacy.

The NAAL breaks literacy into three literacy skill types:

  • Prose
  • Document
  • Quantitative

You can read the details on their site, but since it turns out American adults have roughly comparable prose and document literacy scores, I would lump those two together under a general heading of “reading.” Examples of quantitative literacy tasks are reading math word problems, balancing a checkbook or calculating interest on a loan.

They delineate four literacy levels:

  • Below basic
  • Basic
  • Intermediate
  • Proficient

Again, they go into a lot of detail mapping scores on to these names, but I think what’s most useful are the “key abilities” that distinguish each level in their definitions.

My interest in higher-order literacy immediately takes my eye to the key distinction between “Basic” and “Intermediate.” An intermediate skill level means the individual is capable of:

“reading and understanding moderately dense, less commonplace prose texts as well as summarizing, making simple inferences, determining cause and effect, and recognizing the author’s purpose”
NAAL Overview of Literacy Levels

That list of skills captures the starting point of what we think of as higher-order literacy. (If you’re curious, the highest level of literacy, modestly labeled “Proficient,” seems to mostly be distinguished by the ability to do this sort of analysis across multiple documents.)

For me, the NAAL provides a useful framework for breaking down the literacy problems that instructional techniques (and technologies) are trying to address.

Ponder supports teachers who are trying to move their students from a level of basic literacy to being able to make inferences, determine cause and effect, and recognize the author’s purpose.

…but our goal is to go an important step beyond even the NAAL’s definition of literacy.

Because what is the point, really, of making inferences and identifying cause and effect if ultimately you are unable to probe with your own questions and come to your own conclusions?

Ultimately, the end-game of literacy is the proverbial ability to “think for yourself.”

Flip is a great way to practice literacy. But you need literate students to flip.

The flipped classroom model typically asks students to dig into material on their own and prepare for class discussion, and it obviously presumes a basic student literacy level. But passively consuming a video or skimming a text isn’t enough to drive discussion back in class. As we all know from our own student days, technically meeting the requirements of having “done the reading” does not comprehension make.

Flipping, more so than traditional classroom lectures, requires students to be able to dig beneath the surface of the content, question its credibility, ask clarifying questions and make their own inferences.

Such are the makings of a classic Chicken and Egg conundrum. Flipping requires students to have the skills they are still trying to learn and master through…flipping.

I don’t think anyone has claimed to have solved this conundrum yet, and neither have we, but the first step is realizing what you don’t know, and we do claim to have done that! We will continue to share what we learn from our video research as we iterate on Ponder Video, and welcome more ideas and discussion from teachers everywhere.

Population by Prose Literacy Level (Courtesy NAAL)

Curious about the numbers? The NAAL has been run twice – once in 1993, and a second time in 2003 – and there wasn’t a big change in the scores in those ten years, except a slight increase in quantitative literacy. However, we have a pretty serious higher-order literacy problem: between 34% and 43% of adult Americans lack the higher-order literacy skills to be classified as “intermediate” or above by the NAAL.

EdSurge Silicon Valley Summit: Gamify the Conversation

For better or worse, the Valley has become a rather glam place, in an Iron Chef! kind of way.

The grumpy old man in me is often wistful for the days when credibility was measured in the age and wear of your company t-shirts (and when everyone got the same tent-cut XXL Hanes t-shirt regardless of the size and shape of the wearer).

EdSurge's Tony Wan, Making things happen

EdSurge’s Tony Wan, making it look easy!

I’m happy to report that this past Saturday the get-your-hands-dirty pragmatism I fell in love with when I first arrived in the Valley was out in full force in Mountain View, CA, albeit with better fitting t-shirts. The t-shirts were green and they read: “Keep Calm and Read EdSurge.”

A few weeks ago, I wrote about a courageous first effort at enabling real educator-edtech conversations in Rhode Island. There were problems, potentially event-killing catastrophes, but by thinking on their feet, EdSurgers, Highlander Instituters and EdTechRI-ers adapted and iterated “in real time” and pulled off a successful event. There were clear lessons coming out of that first event, and I was excited to see how “agile” EdSurge could be with how they ran events.

This was the basic setup…

Ready to Ponder at 8AM.

Ready to Ponder @8AM

  • A generously sized room of large round tables, one for each company, each surrounded by chairs, loosely grouped by product focus.
  • Power strips for each table, with tape for the inevitable cable undergrowth.
  • Surprisingly solid wifi, especially given the attendance.
  • Efficient lightning talks in a separate yet adjacent auditorium.

These structural changes got the whole thing moving…

  • No parallel workshops!!! More on this in a moment.
  • A team of easily identifiable green-shirted people assigned to specific companies to match-make educators with technologies.

One final stroke of genius put it over the top…

A raffle for teachers, with real prizes (a bunch of Surfaces, iPads, Chromebooks and Lynda.com subscriptions; no keychains, no squeezy balls). But this was no fundraising raffle. The way teachers obtained raffle tickets was by providing feedback to the companies. Let me just repeat that so you can fully absorb what a great idea it was. (I’ll make the font size really big too.)

The way teachers obtained raffle tickets was by providing feedback to the companies.

Technically, the way it worked was that I was supposed to give them one ticket for filling out a short 1-2 minute survey at my table, and then they could get three tickets by filling out a longer survey at the main EdSurge table island in the center of the room.

In reality, EdSurge had successfully gamified cross-discipline conversation. Once the game was clear, everyone knew how to play.

Gone was the awkward mingling, the Who’s-going-to-say-something-first? interactions. Everyone had a non-stop flow of teachers coming up to them with a clear agenda: “Tell me about your product. I teach ____ to grades ____ and I want to hear how you think it could be relevant to me and my students.”

Really, no workshops?

One of the concerns I had as the event planning came together was the ballsy decision not to have any PD-type workshops at the summit. I had a secret fear that without that incentive many teachers would not make the trek to the conference at all. My fear was all for nought. I’m curious what the final EdSurge numbers will be, but I spoke with well over 100 educators in a constant stream from 9AM to 4PM with just one break for the lunch panel. I’m not sure how to extrapolate that over the 30 vendors, but the place was packed and filled with energetic voices and enthusiastic discussion.

My admittedly self-serving theory is that by putting a stake in the ground and saying:

“This event is dedicated solely to bridging the communication gap between educators and edtech so ‘Conversation’ is going to be its sole activity.”

EdSurge showed everyone involved that they really meant it. And in my experience teachers are some of the best Do-you-mean-it? detectors out there. Still, I think that took guts and it paid off – “Conversation” was center stage the whole day.

The kids will be alright.

There’s just one last thing I need to sing the praises of and then I promise I’ll get more critical. The lunch panel featured 10 kids from roughly 1st through 12th grade. It’s a “stunt” which I have seen attempted at various other events, and I call it a “stunt” because that’s how it often comes off. EdSurge however managed to pull it off. The moderator, Chris Fitz Walsh from Zaption, did a great job of asking questions, and the kids they picked provided real insights beyond the usual “I like to tweet so schools should tweet too.” Judge for yourself:

“My teacher has a little microphone clipped to her shirt so everyone can hear her clearly even when her voice is tired.”

Huh, I would never have thought of that.

So, no room for improvement?

Just to shore up my plummeting credibility, here’s a list of complaints:

  • There were so many teachers that it became unrealistic for them to both talk to me and then fill out the 1-2 minute survey at my table in exchange for more raffle tickets, so I simply started handing them out to whomever I spoke with. I like the idea of eliciting more frank, quantifiable feedback through the survey. But there simply wasn’t enough room for teachers to fill out the survey without creating a bottleneck for conversations. Perhaps if the raffle ticket was attached to the survey itself, then I wouldn’t need to play “enforcer” for whether a teacher “earned” a raffle ticket. The problem with that would be people filling out the survey without even coming to the table…maybe there isn’t a solution.
  • We were almost shouting over one another to be heard. (I know, a good problem to have!) It was definitely a function of attendance levels, but paying attention to the acoustics of the room might have helped manage noise levels. Still, a noisy room is an energetic room, so in the end, it probably did more good than harm.
  • I didn’t have a great way of keeping track of all the teachers and schools who came by to talk other than giving them hand-outs or having them scribble emails on a sheet. Because we’re still in pilot mode for K12, we’re trying to be extra hands-on with our teachers, and I was worried the whole time I wouldn’t be able to follow up with all the people I was talking to. This seems like a very solvable problem though.

So what’s next?

Now that EdSurge has a template for this kind of event, I’m hearing whispers that lots of other cities around the country are requesting their own version, and my guess is that EdSurge will deliver. But what about new templates to address other gaps in edtech? Creating a “conversation space” where educators and technologists can talk is just a first step.

Chatting afterwards with Idit Harel Caperton from Globaloria and EdSurge’s Mary Jo Madda, Idit suggested a principal/administrator/budget-decision-maker focused event, which at the very least this edtech startup would find incredibly useful.

We are a few weeks from sending a survey out to our K12 pilot schools about our pricing plans. We’re still struggling with a conundrum: teachers love the idea of using Ponder, but rarely have any personal budget to pay for it at a price that would sustain the service longer-term. I’m sure we’re not alone.

Unlike professors in higher-ed, K12 teachers lack personal agency to purchase tools. Yet they, more than principals and administrators, know which tools would actually be useful. I also suspect edtech companies need to address structural issues to be convincing to budget-decision-makers:

  • We need a tidy answer to the question “Does it work?” in the form of objective efficacy validation. We’re working on it now!
  • We need tried and true best practices to help manage the risks of trying new technologies in classrooms that have little room for wasting time.
  • We need to reward teachers and schools for taking risks and being open to experimentation. From our perspective, we learn far more from failures (e.g. the technology made no difference) than from unmitigated successes.

I know, this should really be a separate post.

I’ll just wrap up and say: thank you, EdSurge, your hard work and attention to detail showed. We’re not just reading, we’re looking forward to more, and following your lead!

The Risk of Data Monopolies in Education

In EdTech, we take for granted the drive for efficiency and innovation generally associated with private industry. Occasional hiccups are often dismissed as the short-term cost of longer-term innovation. The recent surge of investment has so far escaped being lumped in with still-warm education privatization failures.

Everyone working in education right now agrees that the primary goal of EdTech as a business is to improve educational outcomes; profit is secondary, a means to that end and not an end in itself. Despite these good intentions, the devil is in the details, and a key challenge is the lack of consensus on how to measure student, teacher and school performance.

Recently I wrote about the idea of standardizing the certification of educational technologies, and on Tuesday, September 17th, an audience question came up during the NYT Schools for Tomorrow conference that raised what may be a key next discussion in this debate:

Who will have access to all of the educational data being captured through online learning services?

At Ponder, we have debated this issue at great length. Our answer? Something we call Fair Trade Data. But I’ll save that for a later blog post.

xkcd on Monopoly

Time to think about monopoly in education data (courtesy of xkcd)

Instead, let’s examine that question, which was posed at the end of the “Gamechangers” panel, moderated by Ethan Bronner from the New York Times.

Here are the panelists, from left to right, with the bios provided by the conference:

Daphne Koller, co-founder, Coursera
Michael Horn, co-founder, The Clayton Christensen Institute for Disruptive Innovation
Alec Ross, senior advisor on innovation and former senior advisor to Secretary Hillary Clinton at the U.S. State Department
Paula Singer, CEO, Global Products and Services, Laureate Education

At about 40:20 in the video, you will hear a question from Lee Zia of the National Science Foundation.

“One of the promises on the Big Data side and connecting [sic] directing to learning is the opportunity to share that data. And I worry a little bit that there aren’t any incentives for the various aggregators and collectors to do that sharing. So what is the Federal role if there is one, to try to move us in that direction, not to set the standards, but to get the community together to form those standards, so you might even see something like a national weather service, where the weather data is free, but commercial entities can compete on the services they provide on that data.”

The moderator, Ethan Bronner, passes this question to Alec Ross “…because he works for the Federal Government…”

[~41:10 in the video] Alec Ross responds that (paraphrased) ‘the federal government should keep out of it’; we need to let the market work on the “supply-chain” for delivering education, where those that “build the broadest and strongest partnerships will win the standards war.”

I can’t be sure what Lee Zia was looking for from the panelists, but I think it was more than ‘Should the federal government create data standards?’ And Alec Ross’ answer felt dismissive. It’s also possible that I simply wish he had asked the question that I wanted to ask:

Will all this amazing education data, often being generated by private entities, go the way of the giant troves of insurance, credit card and internet data, that are kept out of public hands in the name of trade secrets and competitive advantage?

Contrary to Alec Ross’ statement, I think the federal government should get involved, if only to steer the industry in the right direction. Ownership of this data is a pressing concern for everyone involved: students, teachers, schools, parents, policy makers, investors and technologists.

The NYT conference was laced with generalizations about the masses of priceless data being captured that would soon revolutionize learning. While the value of that data is still to be proven in education, I think we would all agree that

  • We want students to be able to switch freely between educational providers (something that is already a challenge in offline education);
  • We need education providers (public and private) to provide their data openly to outside parents, evaluators, auditors, researchers and the like to validate their claims;

I hope, though I’m less sure, that we could all agree that

  • We want companies to compete on how they make use of student data, not on merely having the largest stockpile of it.

So, where is that data, right now?

To pick on the conference panelists, the data is sitting on their own organizations’ servers.

While it is early days yet in terms of the accumulation of truly “Big” education data, I’m not aware of any of these services making their data generally available to the public.

We need to start a serious national conversation about educational data, and set expectations on transparency before too many investment dollars get put behind ventures whose perceived value may rest significantly on the proprietary data they will accumulate. We ought to be strategic about managing the control of this data before it’s too late.

Certifying EdTech for Educational Efficacy: An FDA for EdTech

Today I want to share some thoughts we’ve had recently about measuring educational efficacy. The goal of the EdTech industry is to harness the technological advances that have sped many other industries forward in order to improve educational outcomes. This requires collaboration between education practitioners and technology practitioners: a sharing of their separate expertise, in a complex, iterative back and forth.

An FDA-approved blackboard

An FDA-approved blackboard. (Einstein’s blackboard By decltype [CC-BY-SA-3.0])

Proof Needed!

As you may know, Ponder is a critical reading tool designed to scaffold the individual critical reading process as well as to support a collaborative, social group-work experience. Our guiding hypothesis is that we can help students improve these skills with a service that enhances the reading they are doing for class: Ponder scaffolds their critical thinking and facilitates collaboration around those readings. The fact that the service performs those functions doesn’t necessarily mean it achieves the intended goals; only a carefully designed scientific study will prove that.

Proof is needed for any educational tool.

We all must prove that the student’s learning experience is improved by the addition of tool X into their classroom. Agreeing on what “improving the student’s learning experience” means exactly, or how to measure it, is outside the scope of this blog post. However, I think we can all agree that when a school is considering purchasing a new tool, they need to know that it does what it says it does (e.g. makes a positive academic impact). Currently there is too much marketing mumbo-jumbo flying around, and districts don’t have the expertise or the resources to certify companies’ claims. That certification needs to be derived from a test. Considering how many unproven tools there are, we need a good testing and certification process urgently.

(On a side note: because it’s easier to measure, it seems that logistical concerns around tech integration often trump educational impact when it comes to comparing technologies. Will this tool work with this other tool my school uses? This is not an unreasonable question, but it should not be confused with educational effectiveness.)

So, how do we create a good testing and certification process?

Testing and Certification

There are two big challenges with running studies in K12:

  1. Minimizing learning disruptions to the current students who are part of the test.
  2. Maintaining the objectivity of the organizations and individuals involved in administering the study.

First, testing a tool inevitably brings with it the risk of disrupting learning. While to a certain extent that is unavoidable, schools need help managing that risk so that the testing process doesn’t do more harm than good. Without testing, no one can find out what works; without a testing facilitator to take responsibility for controlling the chaos, schools will rightfully remain reluctant to try new technologies. As a school, I would be skeptical of any company’s claims of minimizing disruption in my classrooms. After all, they are likely more concerned about running their study than about possibly causing trouble for the students in the study.

Second, administering a controlled study requires disinterested, skilled professionals to design the study and then evaluate the results. It requires time from school personnel and may require new infrastructure to support the study. Those requirements cost money. Schools should also be skeptical of a study that was paid for by the company behind the product being tested. Even well-intentioned people running a test evaluating a product they have a financial interest in will have difficulty being objective. Not so well-intentioned people may use money to influence the outcome of the study. To address these two challenges, I think there needs to be an objective third-party whose ultimate success is measured in student performance improvements.

We Need an Objective Third-party

Let’s call that entity an EdTech “broker”. It’s a bit like having an FDA for EdTech: the US Food and Drug Administration (FDA) provides an essential objectivity to the food and drug industries, and a similar provider is essential to the success of EdTech.

This broker will create a level, merit-driven playing field that focuses corporate education investments on impact rather than marketing.

So the next question is, who covers the costs of the broker? At first blush one might think it should be the companies. After all, the study is on their product, they are potentially wasting the school’s time, and they stand to make money if it is proven effective. I don’t think that will work. If, as a society, we have decided to apply technological innovation to improve the free education that is provided to all children in the US, we should invest money in creating the political, technical and managerial infrastructure to facilitate the evaluation of these educational technologies.

Some core tenets of such an entity:

  • Standard evaluation processes
  • Transparency of funding sources
  • Shared governance by educational and technological stakeholders
  • Politically non-partisan

Some process ideas:

  • Low processing fee to filter out spurious test applications, but not so high as to make venture backing a prerequisite
  • Guidelines around maximum participation for a given student to minimize disruptions
  • Streamlined re-certification process for technology updates
  • Non-financial benefits (access to additional infrastructure?) to participating schools to encourage participation
  • Publicly available study data for public learnings
  • Low-friction certification verification process (and/or policing of certification claims)
  • Marketing/distribution opportunities for certified products

We also have some ideas for who should be involved – but who or what are we missing? We’d love to hear your thoughts!