Iterative Media: Treating Collaborative Media Like Open Source Code

The second day of the Beyond Broadcast Conference split into smaller working groups, and I attended the "Iterative Media: Treating Media like Open Source Code" session with about 30 other people.

The idea was to draw parallels between open source software development and the trend towards interactive and participatory media.

The Echo Chamber Project has been very much influenced by the open source production model, especially after watching the Revolution OS documentary about the free and open source movements -- as well as reading Eric S. Raymond's The Cathedral and the Bazaar.

Here is some of the discussion that came out of the session as it relates to The Echo Chamber Project:


Technology Audio: Kent Bye's Collaborative Filmmaking Presentation at Ritual Roasters, San Francisco

Listen to a presentation that I gave at Ritual Roasters coffee shop in San Francisco's Mission District on January 3rd, 2006. (Length: 25:28)

Chris Messina announced this meet-up with this blog post and submitted this announcement to Upcoming.org, and there were about a dozen people who showed up to listen to this brief presentation. Jennifer Myronuk recorded the talk, and I was interviewed by Geek Entertainment TV in a piece that should be airing sometime in the future. Update 4/10/06: Here's the episode.

Here are some blog reactions from Andy Kaufman & Irina Slutsky and some photos from Tara Hunt and EFF's Jason Schultz.

This is a more general overview of the collaborative filmmaking schematic, and I'd recommend listening to the presentation while looking at the flowchart shown below.

You can also listen to a similar presentation that I gave at the Open Media Developers Summit in early January here.

Technology Audio: Kent Bye's Collaborative Filmmaking Presentation at Open Media Developers Summit

Listen to a presentation that I gave at the Open Media Developers Summit on October 21st, 2005. (Length: 25:01)

This is a fairly technical overview of the collaborative filmmaking schematic, and I'd highly recommend listening to the presentation while looking at the flowchart shown below.

You can also listen to a similar presentation that I gave in San Francisco in early January here.

Some Pioneering Efforts in Independent Film Distribution

Now that it is so cheap to produce and distribute your own multimedia material, the value added by distribution companies is not what it once was. The Internet has shattered the previous barriers to distributing video and information around the world, which has created an information explosion. And as Herbert Simon says, "a wealth of information creates a poverty of attention." So the mainstream media companies and movie distributors are now competing with individual bloggers, podcasters and videobloggers for the attention of audiences.

And so instead of pre-filtering gatekeepers deciding what will and will not be published, now anyone can publish anything, and it is left to post-filtering systems to sort out what deserves attention -- whether that is Amazon's recommendation system or the word of mouth built up from a network and community of followers.

Below are a few pointers to how this environment is changing the field of film distribution...

Dynamically Creating Sound Bite Sequences with SMIL & Drupal

VICTORY! I am now able to dynamically generate audio metadata and have it be recognized by Quicktime as an edited sound bite sequence! This is a HUGE breakthrough for my collaborative editing schema. Here is a demo of a sequence of three sound bites that have been excerpted from longer audio files and strung together.

I've found a way for Drupal to automatically edit sound bite sequences without having to generate text files or muxed audio files that need to be written to the server.

UPDATE 3/29/06: This URL has the dynamically generated SMIL code (i.e. take a look at the source code for the page to see the SMIL text). And then here is an embedded version of this SMIL metadata:
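A minimal sketch of the kind of SMIL file being generated, with hypothetical filenames and clip times (the clip-begin/clip-end attributes come from SMIL 1.0, which is the flavor QuickTime supports):

```xml
<smil>
  <body>
    <seq>
      <!-- Three sound bites excerpted from longer interview MP3s
           and played back-to-back as one sequence -->
      <audio src="interview-01.mp3" clip-begin="npt=72.5s" clip-end="npt=84.0s"/>
      <audio src="interview-02.mp3" clip-begin="npt=15.0s" clip-end="npt=29.5s"/>
      <audio src="interview-03.mp3" clip-begin="npt=301.0s" clip-end="npt=318.5s"/>
    </seq>
  </body>
</smil>
```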

More details below...

Defining Sound Bite Edges for Collaborative Editing

As I've described before in my collaborative editing schema, my plan is to use the playlist mechanism to have users filter through the interview audio content and help identify good sound bites. But who takes the first cut at defining the boundaries of the "sound bites"? Well, I'll be taking a first crack at defining the sound bite edges, but the volunteers will be able to redefine the "In" and "Out" points.

I was chatting with Kevin Marks yesterday -- who is a former Apple employee, podcasting technology pioneer and Technorati engineer -- and Marks made the great point that there are three stages to editing: "shot-logging, sequencing and polishing."

Later in the conversation, Marks split the first shot-logging into two separate sections, and this is how he described the post-production process of editing a massive data set (with spelling errors corrected):

one is labeling times of interest, which is naturally sloppy -- let people tag as it goes by

second is making good sound bite clips, or chunks of meaning -- which involves a bit of effort to pick good in and out points -- and appeals to a subset of people

3rd is sequencing clips, which once you have good standalone ones defined by meaningful chunk rather than time, is easy -- and gives you a much better granularity to tag, annotate, vote on and so on

the difficulty is bridging the stages

Indeed, there is a lot of difficulty and complexity in bridging these stages. And so that's why I've planned on doing the shot-logging and clean-up process myself so that I can distribute the sequencing portion.

I'll be doing the shot-logging offline using the Final Cut Pro blade tool to mark each sound bite's IN and OUT times in the timeline, and then export these IN and OUT times via Final Cut Pro XML. The text will then be aligned with timecode data for each sound bite and uploaded as a Drupal node with a unique URL, which will allow volunteers to annotate the sound bites with tags and comments. Volunteers will also be able to shorten or lengthen each sound bite, so there may need to be a way to account for the metadata associated with sound bites that have a high variance of IN/OUT edges. Or perhaps the variance will be negligible, considering that the context and meaning of the sound bite will remain relatively the same.
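As a rough illustration of that pipeline, here is a sketch of how the exported IN and OUT points might be pulled out of Final Cut Pro's XML (xmeml) format on the Drupal side; the sample document and field handling below are simplified assumptions, not the project's actual code:

```python
# Sketch: extracting sound bite IN/OUT points from a Final Cut Pro XML
# (xmeml) export so they can be loaded into Drupal nodes.
# The element names follow the xmeml interchange format, but this sample
# document is a hypothetical, heavily trimmed example.
import xml.etree.ElementTree as ET

SAMPLE_XMEML = """
<xmeml version="2">
  <sequence>
    <media><audio><track>
      <clipitem>
        <name>bite-001</name>
        <in>2175</in>
        <out>2520</out>
        <rate><timebase>30</timebase></rate>
      </clipitem>
    </track></audio></media>
  </sequence>
</xmeml>
"""

def extract_sound_bites(xml_text):
    """Return (name, in_seconds, out_seconds) for each clipitem."""
    root = ET.fromstring(xml_text)
    bites = []
    for clip in root.iter("clipitem"):
        # IN/OUT are stored as frame counts; divide by the timebase
        # (frames per second) to get seconds.
        timebase = int(clip.find("./rate/timebase").text)
        in_frames = int(clip.find("in").text)
        out_frames = int(clip.find("out").text)
        bites.append((clip.find("name").text,
                      in_frames / timebase, out_frames / timebase))
    return bites

print(extract_sound_bites(SAMPLE_XMEML))
# e.g. [('bite-001', 72.5, 84.0)]
```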

Then volunteers help with the sequencing stage through web browser-based "editing" using the playlist mechanism. Marks makes a distinction between "editing" and "sequencing":

The key is to distinguish editing and sequencing -- editing needs sample accuracy -- and you aren't going to get that with XML and intermediates without a lot of pain.
sequencing of self-contained chunks without attempting overlaps or dissolves

So these "sequenced" sound bites will be done within playlists by volunteers, and then I'll be exporting the timecode data from this "edited" sequence back into Final Cut Pro so that I can "polish the edits" offline.

In an ideal situation, I would distribute the first editing phase of sound bite parsing to a large set of eager volunteers who would listen to over 45 hours of footage. They would highlight an interesting audio segment, and then tag it and annotate it on the fly as Marks suggests.

The mechanism to do this online could be accomplished with something like the BBC's Annotatable Audio Project as described by Tom Coates -- but it's still behind the firewall of the BBC. I plan on talking with Coates about it in more detail soon, and maybe he'll give me some more insights into how the BBC will be normalizing or making sense of this fuzzy data set. But either way, I probably won't have the resources or technological mechanisms to be able to effectively distribute this task.

So I'll be the one who will be taking a first cut of determining "In" and "Out" points of the sound bites. This is certainly a huge bottleneck that could eventually be overcome by integrating something like the Annotatable Audio tool into the workflow. But doing it myself is a satisfactory workaround for the moment considering that volunteers will still be able to either shorten or lengthen the edges of the sound bites. And also considering that I'm ultimately interested in the distributed sequencing portion of the editing process.

I'm still on the lookout for PHP coding help in making this happen, so please e-mail me at kent@kentbye.com if you're interested in helping out.

Below is the full transcription of the IRC chat that I had yesterday with Kevin Marks, with more commentary on this issue.

I was specifically searching for a way to automatically create smaller sections of MP3s by entering in a set of IN and OUT times. Marks says that it's theoretically possible to automate this task in QuickTime or Final Cut Pro as well as with iTunes, but it's way too complex for me to figure out, and so I'm sticking with the SMIL solution for the moment...

Volunteers Needed: Calling All PHP Coders

I'm starting to write up specific feature requests for Drupal's playlist module, and both Colin and Farsheed said that I should coordinate these requests through this Drupal feature request database.

If you have any PHP coding experience and are interested in helping expand this playlist module for other nodes and for collaborative film editing, then please e-mail me at kent@kentbye.com -- and I'll be sure to put you in touch with Colin and Farsheed. Or you can go through Drupal's tasking mechanism if you'd like, but drop me a line to keep me posted.

SMIL Demos: Paving the Way for Collaborative Audio Editing

This is an explanation for how to edit together sound bite excerpts from longer MP3 files using something called SMIL -- or "Synchronized Multimedia Integration Language."

I've completed some successful experiments with SMIL and Quicktime that provide a promising solution for collaborative editing. A browser-based editing system could use the playlist mechanism to create sequences of sound bites. I discuss this more in these conversations with Lucas Gonze, Colin Brumelle and Farsheed -- and in this blog post: Playlists are to Music as Edit Decision Lists are to Film.

I'm passing this information along so that some developers can add SMIL export functionality to the Drupal playlist module.

What does all of this mean?
I could upload the audio from the 45+ hours of interviews that I've conducted for this project, and then combine this SMIL mechanism with Drupal so that volunteers could start helping edit the film. This Collaborative Filmmaking schematic has more details.

These volunteer edits would be dynamically generated online with SMIL, and other people could listen to them and rate them. The good edits could be translated into real offline edits via the IN and OUT times being exported through Final Cut Pro XML generated by Drupal.

SMIL is a pretty simple mark-up language similar to HTML that allows the creation of audio and video edit decision lists.

You can create a small text file that points to the IN and OUT times of audio or video source files, and then this SMIL file can then be played with Quicktime or Realplayer. It is a simple way to edit audio and video together using text mark-up language, which could easily be automatically generated from a playlist of sound clips.
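The text-file generation step described above could be sketched as a small generator that turns a playlist into SMIL markup. Here each playlist entry is assumed to carry a source file plus IN and OUT times in seconds; the entries themselves are hypothetical, and the attributes follow the SMIL 1.0 syntax that QuickTime supports:

```python
# Sketch: turning an ordered playlist of sound bites into a SMIL
# document. Playlist entries are (source_file, in_seconds, out_seconds);
# the filenames and times below are made-up examples.
def playlist_to_smil(playlist):
    clips = "\n".join(
        '      <audio src="%s" clip-begin="npt=%.1fs" clip-end="npt=%.1fs"/>'
        % (src, t_in, t_out)
        for src, t_in, t_out in playlist
    )
    return ("<smil>\n  <body>\n    <seq>\n"
            + clips
            + "\n    </seq>\n  </body>\n</smil>")

smil = playlist_to_smil([
    ("interview-01.mp3", 72.5, 84.0),
    ("interview-02.mp3", 15.0, 29.5),
])
print(smil)
```

A module like Drupal's playlist module could emit this text directly as a page response, so nothing ever needs to be written to disk on the server.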

Below are more details for using SMIL for dynamic editing of audio and video content...

Playlists are to Music as Edit Decision Lists are to Film

When the timelines of edited film sequences are exported, they are flattened into an "Edit Decision List" that is somewhat analogous to a musical playlist and an academic syllabus or H20 playlist.

UPDATE: I explore how this playlist concept can be applied to filmmaking in conversations with Lucas Gonze, Colin Brumelle and Farsheed.

JD Lasica just posted a video interview with Molly Krause of Harvard's H20 playlist project.

You can think of H20 as a way to share a college class syllabus. It's an ordered reading list that can be used to aggregate knowledge from experts. They describe it as an "open source, educational platform that explores powerful ways to connect professors, students, and researchers online."

Here's an example of an H20 reading list that should give you an introduction to "Social Bookmarking with Del.icio.us" written by Brian Del Vecchio.

H20 tracks derivatives made from playlists as a way to track the relative authority, expertise and reputation of a given author -- much in the same way that academic citations in peer review journals are a way to measure these same metrics. But the H20 playlist format decentralizes this process from the normal gatekeepers and allows for a much more grassroots and bottom-up approach to this concept.

So as Krause says in the interview, you can think of these playlists as a way to provide guided maps to particular fields of study.

My understanding is that playlists have gained a lot of popularity because they give people a way to create sequences of songs to play on their computers or mobile devices. As more and more individual songs are digitally distributed and separated from the order in which they appear on a full music album, playlists have been able to recreate those listening experiences, much in the same way that DJs have done.

So Harvard has expanded this playlist concept from music to academic information, and I would like to expand it even further to a journalistic and filmmaking context.

Netflix is already using the playlist concept for distribution of DVDs with their "Netflix Queue." You select videos that you want to see, and then you determine the order in which you receive them.

This can be extended to the actual generation of films, because filmmakers are essentially doing the same thing -- except with multiple video and audio tracks synchronized on a timeline, and with smaller nuggets of information (i.e. a sound bite vs. an entire DVD).

When the timelines of edited film sequences are exported, they are flattened into an "Edit Decision List" that is analogous to a musical playlist and an academic syllabus or H20 playlist.

Edit Decision Lists can be generated with a web browser interface, and then dynamically translated into online edits by using the SMIL open standard -- or into offline edits by using the Final Cut Pro XML interface that I've described before. I've been able to successfully accomplish both of these in the tests that I've done.
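As a rough sketch of that flattening step, here is how an ordered playlist might be turned into EDL-style events, with the record-side timecodes accumulating so each clip lands back to back on the output timeline. The 30 fps timebase, the reel names, and the event layout are simplifying assumptions, not a full CMX 3600 implementation:

```python
# Sketch: flattening an ordered playlist into Edit Decision List style
# events. Each playlist entry is (reel, in_seconds, out_seconds); the
# record IN/OUT columns accumulate as the sequence plays through.
def to_timecode(seconds, fps=30):
    """Convert seconds to HH:MM:SS:FF at the given frame rate."""
    frames = int(round(seconds * fps))
    f = frames % fps
    s = (frames // fps) % 60
    m = (frames // (fps * 60)) % 60
    h = frames // (fps * 3600)
    return "%02d:%02d:%02d:%02d" % (h, m, s, f)

def playlist_to_edl(playlist, fps=30):
    events, record = [], 0.0
    for n, (reel, t_in, t_out) in enumerate(playlist, start=1):
        duration = t_out - t_in
        # Event number, reel, audio track, Cut, source IN/OUT, record IN/OUT
        events.append("%03d  %-8s AA  C  %s %s %s %s" % (
            n, reel,
            to_timecode(t_in, fps), to_timecode(t_out, fps),
            to_timecode(record, fps), to_timecode(record + duration, fps)))
        record += duration
    return "\n".join(events)

print(playlist_to_edl([("TAPE01", 72.5, 84.0), ("TAPE02", 15.0, 29.5)]))
```

The same (reel, IN, OUT) triples could just as easily be serialized as SMIL for online playback or as Final Cut Pro XML for the offline polish pass, which is the sense in which the playlist and the EDL are the same underlying data.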

Most people get completely lost by this point, but I'm basically exploring the idea of using playlists for the collaborative generation of media much in the same way that Harvard is exploring playlists for the collaborative distribution of knowledge.

I was very happy to discover that the H20 backend has been open sourced; however, the code was a bit too complex for me to parse.

But I'd love to catalyze an effort to port some of these concepts from H20 into Drupal.

I've been in contact with the two Drupal developers of the playlist module, and I hope to talk to them more about it soon.

I also happened to meet "playlist maven" Lucas Gonze of WebJay.com at the Open Media Developers Summit, and may pick his brain about the function and culture around playlists -- as well as best practices for tracking related and derivative playlists.

So with that, I'll share the e-mail and comment below that I just sent off to OurMedia.org's JD Lasica (whom I also had the chance to meet at the summit)...

Screenshots of User Interface for Distributed Editing

I have some preliminary screenshots for what the volunteers will see when they help order sound bites into sequences.

This has been some of my first Drupal development, and I'm sure that this interface will continue to evolve -- but I just want to show what I have so far.

More below...
