W3C

Media Source Extensions

W3C Editor's Draft 05 March 2013

This version:
http://dvcs.w3.org/hg/html-media/raw-file/default/media-source/media-source.html
Latest published version:
http://www.w3.org/TR/media-source/
Latest editor's draft:
http://dvcs.w3.org/hg/html-media/raw-file/default/media-source/media-source.html
Editors:
Aaron Colwell, Google Inc.
Adrian Bateman, Microsoft Corporation
Mark Watson, Netflix Inc.

Abstract

This specification extends HTMLMediaElement to allow JavaScript to generate media streams for playback. Allowing JavaScript to generate streams facilitates a variety of use cases like adaptive streaming and time shifting live streams.

Status of This Document

This section describes the status of this document at the time of its publication. Other documents may supersede this document. A list of current W3C publications and the latest revision of this technical report can be found in the W3C technical reports index at http://www.w3.org/TR/.

The working group maintains a list of all bug reports that the editors have not yet tried to address. This draft highlights some of the pending issues that are still to be discussed in the working group. No decision has been taken on the outcome of these issues, including whether they are valid.

Implementors should be aware that this specification is not stable. Implementors who are not taking part in the discussions are likely to find the specification changing out from under them in incompatible ways. Vendors interested in implementing this specification before it eventually reaches the Candidate Recommendation stage should join the mailing list mentioned below and take part in the discussions.

This document was published by the HTML Working Group as an Editor's Draft. If you wish to make comments regarding this document, please send them to public-html-media@w3.org (subscribe, archives). All feedback is welcome.

Publication as an Editor's Draft does not imply endorsement by the W3C Membership. This is a draft document and may be updated, replaced or obsoleted by other documents at any time. It is inappropriate to cite this document as other than work in progress.

This document was produced by a group operating under the 5 February 2004 W3C Patent Policy. W3C maintains a public list of any patent disclosures made in connection with the deliverables of the group; that page also includes instructions for disclosing a patent. An individual who has actual knowledge of a patent which the individual believes contains Essential Claim(s) must disclose the information in accordance with section 6 of the W3C Patent Policy.

1. Introduction

This specification allows JavaScript to dynamically construct media streams for <audio> and <video>. It defines objects that allow JavaScript to pass media segments to an HTMLMediaElement. A buffering model is also included to describe how the user agent should act when different media segments are appended at different times. Byte stream specifications for WebM, ISO Base Media File Format, and MPEG-2 Transport Streams are given to specify the expected format of byte streams used with these extensions.

[Figure: Media Source Pipeline Model Diagram]

1.1 Goals

This specification was designed with the following goals in mind:

1.2 Definitions

Initialization Segment

A sequence of bytes that contain all of the initialization information required to decode a sequence of media segments. This includes codec initialization data, Track ID mappings for multiplexed segments, and timestamp offsets (e.g. edit lists).

Note

The byte stream format specifications contain format specific examples.

Media Segment

A sequence of bytes that contain packetized & timestamped media data for a portion of the presentation timeline. Media segments are always associated with the most recently appended initialization segment.

Note

The byte stream format specifications contain format specific examples.

Random Access Point

A position in a media segment where decoding and continuous playback can begin without relying on any previous data in the segment. For video this tends to be the location of I-frames. In the case of audio, most audio frames can be treated as a random access point. Since video tracks tend to have a sparser distribution of random access points, the locations of these points are usually treated as the random access points for multiplexed streams.

Presentation Start Time

The presentation start time is the earliest time point in the presentation and specifies the initial playback position and earliest possible position. All presentations created using this specification have a presentation start time of 0.

MediaSource object URL

A MediaSource object URL is a unique Blob URI created by createObjectURL(). It is used to attach a MediaSource object to an HTMLMediaElement.

These URLs are the same as what the File API specification calls a Blob URI, except that anything in the definition of that feature that refers to File and Blob objects is hereby extended to also apply to MediaSource objects.

Track ID

A Track ID is a byte stream format specific identifier that marks sections of the byte stream as being part of a specific track. The Track ID in a track description identifies which sections of a media segment belong to that track.

Track Description

A byte stream format specific structure that provides the Track ID, codec configuration, and other metadata for a single track. Each track description inside a single initialization segment must have a unique Track ID.

Coded Frame

A unit of compressed media data that has a presentation timestamp and decode timestamp. The presentation timestamp indicates when the frame should be rendered. The decode timestamp indicates when the frame needs to be decoded. If frames can be decoded out of order, then the decode timestamp must be present in the bytestream. If frames cannot be decoded out of order and a decode timestamp is not present in the bytestream, then the decode timestamp is equal to the presentation timestamp.

Parent Media Source
The parent media source of a SourceBuffer object is the MediaSource object that created it.
Append Sequence
A series of appendBuffer() or appendStream() calls on a SourceBuffer without any intervening abort() calls. The media segments in an append sequence must be adjacent and monotonically increasing in time without any gaps. An abort() call starts a new append sequence which allows media segments to be appended in non-monotonically increasing order.

2. MediaSource Object

The MediaSource object represents a source of media data for an HTMLMediaElement. It keeps track of the readyState for this source as well as a list of SourceBuffer objects that can be used to add media data to the presentation. MediaSource objects are created by the web application and then attached to an HTMLMediaElement. The application uses the SourceBuffer objects in sourceBuffers to add media data to this source. The HTMLMediaElement fetches this media data from the MediaSource object when it is needed during playback.
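The attachment flow described above can be sketched with a small helper. This is a minimal sketch, not normative: the element id and callback are illustrative, and the helper takes its collaborators as parameters (the URL API object is window.URL in a browser) so the wiring logic stays self-contained.

```javascript
// Wires a MediaSource to a media element via a MediaSource object URL
// and invokes onOpen once readyState transitions to "open".
function attachSource(mediaElement, mediaSource, urlApi, onOpen) {
  mediaSource.addEventListener('sourceopen', onOpen);
  mediaElement.src = urlApi.createObjectURL(mediaSource);
  return mediaElement.src;
}

// Typical browser usage (the element id is an assumption):
//   var video = document.getElementById('player');
//   attachSource(video, new MediaSource(), window.URL, function () {
//     // readyState is now "open"; safe to call addSourceBuffer() here.
//   });
```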

enum ReadyState {
    "closed",
    "open",
    "ended"
};
Enumeration description
closed Indicates the source is not currently attached to a media element.
open The source has been opened by a media element and is ready for data to be appended to the SourceBuffer objects in sourceBuffers.
ended The source is still attached to a media element, but endOfStream() has been called.
enum EndOfStreamError {
    "network",
    "decode"
};
Enumeration description
network

Terminates playback and signals that a network error has occurred.

Note

If the JavaScript fetching media data encounters a network error it should use this status code to terminate playback.

decode

Terminates playback and signals that a decoding error has occurred.

Note

If the JavaScript code fetching media data has problems parsing the data it should use this status code to terminate playback.
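A simplified sketch of how an application might map its own failures onto these status codes. The segment URL and surrounding objects are assumptions for illustration, and treating a synchronous appendBuffer() failure as a decode error is a simplification (parse errors can also surface asynchronously):

```javascript
// Fetch one media segment and route failures to endOfStream().
function fetchSegment(url, mediaSource, sourceBuffer) {
  var xhr = new XMLHttpRequest();
  xhr.open('GET', url);
  xhr.responseType = 'arraybuffer';
  xhr.onload = function () {
    try {
      sourceBuffer.appendBuffer(xhr.response);
    } catch (e) {
      // Data arrived but could not be accepted: signal a decode error.
      mediaSource.endOfStream('decode');
    }
  };
  xhr.onerror = function () {
    // The transfer itself failed: terminate playback with a network error.
    mediaSource.endOfStream('network');
  };
  xhr.send();
}
```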

[Constructor]
interface MediaSource : EventTarget {
    readonly attribute SourceBufferList    sourceBuffers;
    readonly attribute SourceBufferList    activeSourceBuffers;
    readonly attribute ReadyState          readyState;
             attribute unrestricted double duration;
    SourceBuffer   addSourceBuffer (DOMString type);
    void           removeSourceBuffer (SourceBuffer sourceBuffer);
    void           endOfStream (optional EndOfStreamError error);
    static boolean isTypeSupported (DOMString type);
};

2.1 Attributes

activeSourceBuffers of type SourceBufferList, readonly

Contains the subset of sourceBuffers that are providing the selected video track, the enabled audio tracks, and the "showing" or "hidden" text tracks.

Note

The Changes to selected/enabled track state section describes how this attribute gets updated.

duration of type unrestricted double

Allows the web application to set the presentation duration. The duration is initially set to NaN when the MediaSource object is created.

On getting, run the following steps:

  1. If the readyState attribute is "closed" then return NaN and abort these steps.
  2. Return the current value of the attribute.

On setting, run the following steps:

  1. If the value being set is negative or NaN then throw an INVALID_ACCESS_ERR exception and abort these steps.
  2. If the readyState attribute is not "open" then throw an INVALID_STATE_ERR exception and abort these steps.
  3. If the updating attribute equals true on any SourceBuffer in sourceBuffers, then throw an INVALID_STATE_ERR exception and abort these steps.
  4. Run the duration change algorithm with new duration set to the value being assigned to this attribute.
    Note

    appendBuffer(), appendStream() and endOfStream() can update the duration under certain circumstances.

readyState of type ReadyState, readonly

Indicates the current state of the MediaSource object. When the MediaSource is created readyState must be set to "closed".

sourceBuffers of type SourceBufferList, readonly
Contains the list of SourceBuffer objects associated with this MediaSource. When readyState equals "closed" this list will be empty. Once readyState transitions to "open" SourceBuffer objects can be added to this list by using addSourceBuffer().

2.2 Methods

addSourceBuffer

Adds a new SourceBuffer to sourceBuffers.

Parameter: type (DOMString)
Return type: SourceBuffer

When this method is invoked, the user agent must run the following steps:

  1. If type is null or an empty string then throw an INVALID_ACCESS_ERR exception and abort these steps.
  2. If type contains a MIME type that is not supported or contains a MIME type that is not supported with the types specified for the other SourceBuffer objects in sourceBuffers, then throw a NOT_SUPPORTED_ERR exception and abort these steps.
  3. If the user agent can't handle any more SourceBuffer objects then throw a QUOTA_EXCEEDED_ERR exception and abort these steps.
    Note

    For example, a user agent may throw a QUOTA_EXCEEDED_ERR exception if the media element has reached the HAVE_METADATA readyState. This can occur if the user agent's media engine does not support adding more tracks during playback.

  4. If the readyState attribute is not in the "open" state then throw an INVALID_STATE_ERR exception and abort these steps.
  5. Create a new SourceBuffer object and associated resources.
  6. Add the new object to sourceBuffers and queue a task to fire a simple event named addsourcebuffer at sourceBuffers.
  7. Return the new object.
endOfStream

Signals the end of the stream.

Parameter: error (EndOfStreamError, optional)
Return type: void

When this method is invoked, the user agent must run the following steps:

  1. If the readyState attribute is not in the "open" state then throw an INVALID_STATE_ERR exception and abort these steps.
  2. If the updating attribute equals true on any SourceBuffer in sourceBuffers, then throw an INVALID_STATE_ERR exception and abort these steps.
  3. Change the readyState attribute value to "ended".
  4. Queue a task to fire a simple event named sourceended at the MediaSource.
  5. If error is not set, null, or an empty string
    1. Run the duration change algorithm with new duration set to the highest end timestamp across all SourceBuffer objects in sourceBuffers.
      Note

      This allows the duration to properly reflect the end of the appended media segments. For example, if the duration was explicitly set to 10 seconds and only media segments for 0 to 5 seconds were appended before endOfStream() was called, then the duration will get updated to 5 seconds.

    2. Notify the media element that it now has all of the media data. Playback should continue until all the media passed in via appendBuffer() and appendStream() has been played.
    If error is set to "network"
    If the HTMLMediaElement.readyState attribute equals HAVE_NOTHING
    Run the steps of the resource fetch algorithm.
    If the HTMLMediaElement.readyState attribute is greater than HAVE_NOTHING
    Run the "If the connection is interrupted after some media data has been received, causing the user agent to give up trying to fetch the resource" steps of the resource fetch algorithm.
    If error is set to "decode"
    If the HTMLMediaElement.readyState attribute equals HAVE_NOTHING
    Run the "If the media data can be fetched but is found by inspection to be in an unsupported format, or can otherwise not be rendered at all" steps of the resource fetch algorithm.
    If the HTMLMediaElement.readyState attribute is greater than HAVE_NOTHING
    Run the media data is corrupted steps of the resource fetch algorithm.
    Otherwise
    Throw an INVALID_ACCESS_ERR exception.
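The no-argument case above (normal completion) is typically triggered once the final append finishes, since endOfStream() throws while any SourceBuffer is still updating. A sketch, where hasMoreSegments is a hypothetical application callback reporting whether more media remains to fetch:

```javascript
// Calls endOfStream() (no error argument: normal end of stream) after
// the last append completes and no further segments are expected.
function endWhenDone(mediaSource, sourceBuffer, hasMoreSegments) {
  sourceBuffer.addEventListener('updateend', function () {
    if (!sourceBuffer.updating && !hasMoreSegments()) {
      mediaSource.endOfStream();
    }
  });
}
```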
isTypeSupported, static

Checks whether the MediaSource is capable of creating SourceBuffer objects for the specified MIME type.

Note

If true is returned from this method, it only indicates that the MediaSource implementation is capable of creating SourceBuffer objects for the specified MIME type. An addSourceBuffer() call may still fail if sufficient resources are not available to support the addition of a new SourceBuffer.

Note

This method returning true implies that HTMLMediaElement.canPlayType() will return "maybe" or "probably" since it does not make sense for a MediaSource to support a type the HTMLMediaElement knows it cannot play.

Parameter: type (DOMString)
Return type: boolean

When this method is invoked, the user agent must run the following steps:

  1. If type is an empty string, then return false.
  2. If type does not contain a valid MIME type string, then return false.
  3. If type contains a media type or media subtype that the MediaSource does not support, then return false.
  4. If type contains a codec that the MediaSource does not support, then return false.
  5. If the MediaSource does not support the specified combination of media type, media subtype, and codecs then return false.
  6. Return true.
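Applications typically run this check before calling addSourceBuffer(). A sketch; the candidate list below is illustrative, and the MediaSource constructor is passed in as a parameter so the selection logic is self-contained:

```javascript
// Returns the first container/codec combination the implementation
// reports support for, or null if none is supported.
function pickSupportedType(MediaSourceCtor, candidates) {
  for (var i = 0; i < candidates.length; i++) {
    if (MediaSourceCtor.isTypeSupported(candidates[i])) {
      return candidates[i];
    }
  }
  return null;
}

// Illustrative candidates, ordered by preference.
var candidates = [
  'video/webm; codecs="vorbis,vp8"',
  'video/mp4; codecs="avc1.4d401f,mp4a.40.2"'
];
```

Remember that a true result only means a SourceBuffer can probably be created; addSourceBuffer() may still throw QUOTA_EXCEEDED_ERR if resources are exhausted.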
removeSourceBuffer

Removes a SourceBuffer from sourceBuffers.

Parameter: sourceBuffer (SourceBuffer)
Return type: void

When this method is invoked, the user agent must run the following steps:

  1. If sourceBuffer is null then throw an INVALID_ACCESS_ERR exception and abort these steps.
  2. If sourceBuffer specifies an object that is not in sourceBuffers then throw a NOT_FOUND_ERR exception and abort these steps.
  3. If the sourceBuffer.updating attribute equals true, then run the following steps:
    1. Abort the stream append loop algorithm if it is running.
    2. Set the sourceBuffer.updating attribute to false.
    3. Queue a task to fire a simple event named abort at sourceBuffer.
    4. Queue a task to fire a simple event named updateend at sourceBuffer.
  4. Set the sourceBuffer attribute in all tracks in sourceBuffer.audioTracks, sourceBuffer.videoTracks, and sourceBuffer.textTracks to null.
  5. Remove all the tracks in sourceBuffer.audioTracks, sourceBuffer.videoTracks, and sourceBuffer.textTracks from the respective audioTracks, videoTracks, and textTracks attributes on the HTMLMediaElement.
  6. Remove all the tracks in sourceBuffer.audioTracks, sourceBuffer.videoTracks, and sourceBuffer.textTracks and queue a task to fire a simple event named removetrack at the modified lists.
  7. Queue a task to fire a simple event named removetrack at the HTMLMediaElement track lists that were modified.
  8. If sourceBuffer is in activeSourceBuffers, then run the following steps:
    1. Remove sourceBuffer from activeSourceBuffers.
    2. Queue a task to fire a simple event named removesourcebuffer at activeSourceBuffers.
    3. If the selected video track was removed from the videoTracks attribute on the HTMLMediaElement in a step above, then queue a task to fire a simple event named change at the videoTracks attribute.
    4. If an enabled audio track was removed from the audioTracks attribute on the HTMLMediaElement in a step above, then queue a task to fire a simple event named change at the audioTracks attribute.
    5. If a TextTrack with its mode attribute set to "showing" or "hidden" was removed from the textTracks attribute on the HTMLMediaElement in a step above, then queue a task to fire a simple event named change at the textTracks attribute.
  9. Remove sourceBuffer from sourceBuffers and queue a task to fire a simple event named removesourcebuffer at sourceBuffers.
  10. Destroy all resources for sourceBuffer.

2.3 Event Summary

Event name    Interface   Dispatched when...
sourceopen    Event       readyState transitions from "closed" to "open" or from "ended" to "open".
sourceended   Event       readyState transitions from "open" to "ended".
sourceclose   Event       readyState transitions from "open" to "closed" or from "ended" to "closed".

2.4 Algorithms

2.4.1 Attaching to a media element

A MediaSource object can be attached to a media element by assigning a MediaSource object URL to the media element src attribute or the src attribute of a <source> inside a media element. A MediaSource object URL is created by passing a MediaSource object to createObjectURL().

If the resource fetch algorithm's absolute URL matches the MediaSource object URL, run the following steps right before the "Perform a potentially CORS-enabled fetch" step in the resource fetch algorithm.

If readyState is NOT set to "closed"
Run the steps of the resource fetch algorithm.
Otherwise
  1. Set the readyState attribute to "open".
  2. Queue a task to fire a simple event named sourceopen at the MediaSource.
  3. Allow the resource fetch algorithm to progress based on data passed in via appendBuffer() and appendStream().

2.4.2 Detaching from a media element

The following steps are run in any case where the media element is going to transition to NETWORK_EMPTY and queue a task to fire a simple event named emptied at the media element. These steps should be run right before the transition.

  1. Set the readyState attribute to "closed".
  2. Set the duration attribute to NaN.
  3. Remove all the SourceBuffer objects from activeSourceBuffers.
  4. Queue a task to fire a simple event named removesourcebuffer at activeSourceBuffers.
  5. Remove all the SourceBuffer objects from sourceBuffers.
  6. Queue a task to fire a simple event named removesourcebuffer at sourceBuffers.
  7. Queue a task to fire a simple event named sourceclose at the MediaSource.

2.4.3 Seeking

Run the following steps as part of the "Wait until the user agent has established whether or not the media data for the new playback position is available, and, if it is, until it has decoded enough data to play back that position" step of the seek algorithm:

  1. The media element looks for media segments containing the new playback position in each SourceBuffer object in activeSourceBuffers.
    If one or more of the objects in activeSourceBuffers is missing media segments for the new playback position
    1. Set the HTMLMediaElement.readyState attribute to HAVE_METADATA.
    2. The media element waits for the necessary media segments to be passed to appendBuffer() or appendStream().
      Note

      The web application can use buffered to determine what the media element needs to resume playback.

    Otherwise
    Continue
  2. The media element resets all decoders and initializes each one with data from the appropriate initialization segment.
  3. The media element feeds data from the media segments into the decoders until the new playback position is reached.
  4. Resume the seek algorithm at the "Await a stable state" step.
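From the application's side, the wait in step 1 is usually handled by listening for the media element's seeking event and appending whatever the new position needs. A sketch; fetchSegmentFor is a hypothetical application function that locates and appends the segment covering a given time:

```javascript
// On 'seeking', check whether the new playback position falls inside a
// buffered range; if not, the element stays at HAVE_METADATA until the
// application appends the missing media segment.
function onSeeking(video, sourceBuffer, fetchSegmentFor) {
  var t = video.currentTime;
  var ranges = sourceBuffer.buffered;
  for (var i = 0; i < ranges.length; i++) {
    if (ranges.start(i) <= t && t < ranges.end(i)) {
      return; // already buffered: the seek can complete without new data
    }
  }
  fetchSegmentFor(t);
}

// Browser wiring (illustrative):
//   video.addEventListener('seeking', function () {
//     onSeeking(video, sourceBuffer, fetchSegmentFor);
//   });
```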

2.4.4 SourceBuffer Monitoring

The following steps are periodically run during playback to make sure that all of the SourceBuffer objects in activeSourceBuffers have enough data to ensure uninterrupted playback. Appending new segments and changes to activeSourceBuffers also cause these steps to run because they affect the conditions that trigger state transitions.

Having enough data to ensure uninterrupted playback is an implementation specific condition where the user agent determines that it currently has enough data to play the presentation without stalling for a meaningful period of time. This condition is constantly evaluated to determine when to transition the media element into and out of the HAVE_ENOUGH_DATA ready state. These transitions indicate when the user agent believes it has enough data buffered or it needs more data respectively.

Note

An implementation may choose to use bytes buffered, time buffered, the append rate, or any other metric it sees fit to determine when it has enough data. The metrics used may change during playback so web applications should only rely on the value of HTMLMediaElement.readyState to determine whether more data is needed or not.

Note

When the media element needs more data, it should transition from HAVE_ENOUGH_DATA to HAVE_FUTURE_DATA early enough for a web application to be able to respond without causing an interruption in playback. For example, transitioning when the current playback position is 500ms before the end of the buffered data gives the application roughly 500ms to append more data before playback stalls.

If buffered for all objects in activeSourceBuffers do not contain TimeRanges for the current playback position:
  1. Set the HTMLMediaElement.readyState attribute to HAVE_METADATA.
  2. If this is the first transition to HAVE_METADATA, then queue a task to fire a simple event named loadedmetadata at the media element.
  3. Abort these steps.
If buffered for all objects in activeSourceBuffers contain TimeRanges that include the current playback position and enough data to ensure uninterrupted playback:
  1. Set the HTMLMediaElement.readyState attribute to HAVE_ENOUGH_DATA.
  2. Queue a task to fire a simple event named canplaythrough at the media element.
  3. Playback may resume at this point if it was previously suspended by a transition to HAVE_CURRENT_DATA.
  4. Abort these steps.
If buffered for at least one object in activeSourceBuffers contains a TimeRange that includes the current playback position but not enough data to ensure uninterrupted playback:
  1. Set the HTMLMediaElement.readyState attribute to HAVE_FUTURE_DATA.
  2. If the previous value of HTMLMediaElement.readyState was less than HAVE_FUTURE_DATA, then queue a task to fire a simple event named canplay at the media element.
  3. Playback may resume at this point if it was previously suspended by a transition to HAVE_CURRENT_DATA.
  4. Abort these steps.
If buffered for at least one object in activeSourceBuffers contains a TimeRange that ends at the current playback position and does not have a range covering the time immediately after the current position:
  1. Set the HTMLMediaElement.readyState attribute to HAVE_CURRENT_DATA.
  2. If this is the first transition to HAVE_CURRENT_DATA, then queue a task to fire a simple event named loadeddata at the media element.
  3. Playback is suspended at this point since the media element doesn't have enough data to advance the timeline.
  4. Abort these steps.
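An application can approximate the same monitoring on its side, for example by measuring how much data remains buffered ahead of the playback position and appending more when that falls below a threshold. The threshold and the TimeRanges-like shape below are assumptions for illustration; as the note above says, HTMLMediaElement.readyState remains the authoritative signal:

```javascript
// Returns how many seconds of data are buffered ahead of the current
// playback position, or 0 if no buffered range contains it. `buffered`
// is a TimeRanges-like object exposing length, start(i), and end(i).
function bufferedAhead(buffered, currentTime) {
  for (var i = 0; i < buffered.length; i++) {
    if (buffered.start(i) <= currentTime && currentTime < buffered.end(i)) {
      return buffered.end(i) - currentTime;
    }
  }
  return 0;
}

// Illustrative policy: fetch more data when under 10 seconds remain.
function needsMoreData(buffered, currentTime) {
  return bufferedAhead(buffered, currentTime) < 10;
}
```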

2.4.5 Changes to selected/enabled track state

During playback, activeSourceBuffers needs to be updated if the selected video track, the enabled audio tracks, or a text track mode changes. When one or more of these changes occur, the following steps must be followed.

If the selected video track changes, then run the following steps:
  1. If the SourceBuffer associated with the previously selected video track is not associated with any other enabled tracks, run the following steps:
    1. Remove the SourceBuffer from activeSourceBuffers.
    2. Queue a task to fire a simple event named removesourcebuffer at activeSourceBuffers
  2. If the SourceBuffer associated with the newly selected video track is not already in activeSourceBuffers, run the following steps:
    1. Add the SourceBuffer to activeSourceBuffers.
    2. Queue a task to fire a simple event named addsourcebuffer at activeSourceBuffers
If an audio track becomes disabled and the SourceBuffer associated with this track is not associated with any other enabled or selected track, then run the following steps:
  1. Remove the SourceBuffer associated with the audio track from activeSourceBuffers
  2. Queue a task to fire a simple event named removesourcebuffer at activeSourceBuffers
If an audio track becomes enabled and the SourceBuffer associated with this track is not already in activeSourceBuffers, then run the following steps:
  1. Add the SourceBuffer associated with the audio track to activeSourceBuffers
  2. Queue a task to fire a simple event named addsourcebuffer at activeSourceBuffers
If a text track mode becomes "disabled" and the SourceBuffer associated with this track is not associated with any other enabled or selected track, then run the following steps:
  1. Remove the SourceBuffer associated with the text track from activeSourceBuffers
  2. Queue a task to fire a simple event named removesourcebuffer at activeSourceBuffers
If a text track mode becomes "showing" or "hidden" and the SourceBuffer associated with this track is not already in activeSourceBuffers, then run the following steps:
  1. Add the SourceBuffer associated with the text track to activeSourceBuffers
  2. Queue a task to fire a simple event named addsourcebuffer at activeSourceBuffers

2.4.6 Duration change

Follow these steps when duration needs to change to a new duration.

  1. If the current value of duration is equal to new duration, then abort these steps.
  2. Set old duration to the current value of duration.
  3. Update duration to new duration.
  4. If the new duration is less than old duration, then call remove(new duration, old duration) on all objects in sourceBuffers.
    Note

    This preserves audio frames that start before and end after the duration. The user agent must end playback at duration even if the audio frame extends beyond this time.

  5. Update the media controller duration to new duration and run the HTMLMediaElement duration change algorithm.

3. SourceBuffer Object

enum AbortMode {
    "continuation",
    "timestampOffset"
};
Enumeration description
continuation

The next append sequence will be placed immediately after the append sequence that was just aborted.

timestampOffset

The next append sequence will be inserted at the presentation time specified by the timestampOffset attribute instead of the time computed from the timestampOffset attribute and coded frame timestamps.

Note

These abort modes cause the timestampOffset attribute to get updated when the first coded frame of the new append sequence is appended. This allows the rest of the coded frames in the sequence to follow the normal presentation & decode timestamp computation rules and provides a way for the application to observe what offset is being applied to these timestamps.
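A sketch of the "continuation" mode in an adaptive-streaming quality switch: the partially appended segment is discarded, and the next append sequence is placed immediately after the content already appended. The initialization segment variable is an assumption for illustration:

```javascript
// Mid-stream variant switch: abort("continuation") discards any
// partially appended data and makes the next append sequence continue
// where the previous one ended, so a new init segment plus media
// segments from another bitrate can follow without a gap.
function switchVariant(sourceBuffer, newInitSegment) {
  sourceBuffer.abort('continuation');
  sourceBuffer.appendBuffer(newInitSegment);
}
```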

interface SourceBuffer : EventTarget {
    readonly attribute boolean        updating;
    readonly attribute TimeRanges     buffered;
             attribute double         timestampOffset;
    readonly attribute AudioTrackList audioTracks;
    readonly attribute VideoTrackList videoTracks;
    readonly attribute TextTrackList  textTracks;
    void appendBuffer (ArrayBuffer data);
    void appendBuffer (ArrayBufferView data);
    void appendStream (Stream stream, optional unsigned long long maxSize);
    void abort (optional AbortMode mode);
    void remove (double start, double end);
};

3.1 Attributes

audioTracks of type AudioTrackList, readonly
The list of AudioTrack objects created by this object.
buffered of type TimeRanges, readonly

Indicates what TimeRanges are buffered in the SourceBuffer.

When the attribute is read the following steps must occur:

  1. If this object has been removed from the sourceBuffers attribute of the parent media source then throw an INVALID_STATE_ERR exception and abort these steps.
  2. Return a new static normalized TimeRanges object for the media segments buffered.
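Since appended segments need not be contiguous, buffered may report several disjoint ranges. A small debugging helper, assuming only the TimeRanges shape (length, start(i), end(i)):

```javascript
// Formats a TimeRanges-like object as e.g. "[0.0-5.0) [10.0-20.0)"
// to show which portions of the presentation timeline are buffered.
function formatRanges(buffered) {
  var parts = [];
  for (var i = 0; i < buffered.length; i++) {
    parts.push('[' + buffered.start(i).toFixed(1) + '-' +
               buffered.end(i).toFixed(1) + ')');
  }
  return parts.join(' ');
}
```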
textTracks of type TextTrackList, readonly
The list of TextTrack objects created by this object.
timestampOffset of type double

Controls the offset applied to timestamps inside subsequent media segments that are appended to this SourceBuffer. The timestampOffset is initially set to 0 which indicates that no offset is being applied.

On getting, return the initial value or the last value that was successfully set.

On setting, run the following steps:

  1. If this object has been removed from the sourceBuffers attribute of the parent media source, then throw an INVALID_STATE_ERR exception and abort these steps.
  2. If the updating attribute equals true, then throw an INVALID_STATE_ERR exception and abort these steps.
  3. If the readyState attribute of the parent media source is in the "ended" state then run the following steps:

    1. Set the readyState attribute of the parent media source to "open"
    2. Queue a task to fire a simple event named sourceopen at the parent media source.
  4. If this object is waiting for the end of a media segment to be appended, then throw an INVALID_STATE_ERR exception and abort these steps.
  5. Update the attribute to the new value.
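A common use is splicing content whose internal timestamps start at zero into a later position on the timeline, for example inserting a clip at presentation time 30. This is a sketch under those assumptions; abort() is called first so the current append sequence ends before the offset changes:

```javascript
// Starts a new append sequence whose segments are shifted to begin at
// presentationTime. Further media segments would follow on 'updateend'.
function spliceAt(sourceBuffer, presentationTime, initSegment) {
  if (sourceBuffer.updating) {
    throw new Error('cannot change timestampOffset while updating');
  }
  sourceBuffer.abort();                       // end the current append sequence
  sourceBuffer.timestampOffset = presentationTime;
  sourceBuffer.appendBuffer(initSegment);     // illustrative segment data
}
```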
updating of type boolean, readonly

Indicates whether an appendBuffer(), appendStream(), or remove() operation is still being processed.

videoTracks of type VideoTrackList, readonly
The list of VideoTrack objects created by this object.

3.2 Methods

abort

Aborts the current segment and resets the segment parser.

Parameter: mode (AbortMode, optional)
Return type: void

When this method is invoked, the user agent must run the following steps:

  1. If this object has been removed from the sourceBuffers attribute of the parent media source then throw an INVALID_STATE_ERR exception and abort these steps.
  2. If the readyState attribute of the parent media source is not in the "open" state then throw an INVALID_STATE_ERR exception and abort these steps.
  3. If mode is set and does not equal null, an empty string, or a valid AbortMode, then throw an INVALID_ACCESS_ERR exception and abort these steps.
  4. If the continuation timestamp is unset, then run the following steps:
    1. If mode equals "continuation" and the highest presentation end timestamp is unset, then throw an INVALID_STATE_ERR exception and abort these steps.
    2. If the highest presentation end timestamp is set, then update the continuation timestamp to equal the highest presentation end timestamp.
  5. If the updating attribute equals true, then run the following steps:
    1. Abort the stream append loop algorithm if it is running.
    2. Set the updating attribute to false.
    3. Queue a task to fire a simple event named abort at this SourceBuffer object.
    4. Queue a task to fire a simple event named updateend at this SourceBuffer object.
  6. If mode is not set, null, or an empty string:
    Unset the abort mode.
    Otherwise:
    Update the abort mode to equal mode.
  7. Run the reset parser state algorithm.
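The continuation-timestamp and abort-mode bookkeeping in steps 4 and 6 can be sketched as a plain function. This is a simplified model, not the real SourceBuffer; the `state` object shape and property names are illustrative only:

```javascript
// Simplified model of abort() steps 4 and 6. The state object is a
// stand-in for SourceBuffer-internal variables; names are illustrative.
function applyAbort(state, mode) {
  // Step 4: if the continuation timestamp is unset, derive it from the
  // highest presentation end timestamp when possible.
  if (state.continuationTimestamp === undefined) {
    if (mode === "continuation" &&
        state.highestPresentationEndTimestamp === undefined) {
      throw new Error("INVALID_STATE_ERR");
    }
    if (state.highestPresentationEndTimestamp !== undefined) {
      state.continuationTimestamp = state.highestPresentationEndTimestamp;
    }
  }
  // Step 6: unset the abort mode unless a non-empty mode was given.
  if (mode === undefined || mode === null || mode === "") {
    state.abortMode = undefined;
  } else {
    state.abortMode = mode;
  }
  return state;
}
```

Note how calling abort("continuation") before any media has been appended is an error, because there is no highest presentation end timestamp to continue from.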
appendBuffer

Appends the segment data in an ArrayBuffer to the source buffer.

The steps for this method are the same as the ArrayBufferView version of appendBuffer().

Parameter Type Nullable Optional Description
data ArrayBuffer
Return type: void
appendBuffer

Appends the segment data in an ArrayBufferView to the source buffer.

Parameter Type Nullable Optional Description
data ArrayBufferView
Return type: void

When this method is invoked, the user agent must run the following steps:

  1. If data is null then throw an INVALID_ACCESS_ERR exception and abort these steps.
  2. If this object has been removed from the sourceBuffers attribute of the parent media source then throw an INVALID_STATE_ERR exception and abort these steps.
  3. If the updating attribute equals true, then throw an INVALID_STATE_ERR exception and abort these steps.
  4. If the readyState attribute of the parent media source is in the "ended" state then run the following steps:

    1. Set the readyState attribute of the parent media source to "open"
    2. Queue a task to fire a simple event named sourceopen at the parent media source.
  5. If the buffer full flag equals true, then throw a QUOTA_EXCEEDED_ERR exception and abort these steps.

    Note

    The web application must use remove() to free up space in the SourceBuffer.

  6. Add data to the end of the input buffer.
  7. Set the updating attribute to true.
  8. Queue a task to fire a simple event named updatestart at this SourceBuffer object.
  9. Asynchronously run the segment parser loop algorithm.
  10. When the segment parser loop returns control to this algorithm, run the remaining steps.
  11. Set the updating attribute to false.
  12. Queue a task to fire a simple event named update at this SourceBuffer object.
  13. Queue a task to fire a simple event named updateend at this SourceBuffer object.
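The synchronous guard steps and the resulting event sequence can be sketched as a toy model. This is not the real SourceBuffer; the object shape and the `fire` callback are illustrative, and the segment parser loop is elided:

```javascript
// Sketch of the appendBuffer() steps: validation, the "ended" -> "open"
// transition, and the updatestart/update/updateend event sequence.
// The sb object shape is illustrative, not part of the spec.
function appendBufferModel(sb, data, fire) {
  if (data === null) throw new Error("INVALID_ACCESS_ERR");   // step 1
  if (sb.removed) throw new Error("INVALID_STATE_ERR");       // step 2
  if (sb.updating) throw new Error("INVALID_STATE_ERR");      // step 3
  if (sb.parent.readyState === "ended") {                     // step 4
    sb.parent.readyState = "open";
    fire("sourceopen");
  }
  if (sb.bufferFull) throw new Error("QUOTA_EXCEEDED_ERR");   // step 5
  sb.inputBuffer.push(data);                                  // step 6
  sb.updating = true;                                         // step 7
  fire("updatestart");                                        // step 8
  // Steps 9-13: the (elided) segment parser loop runs, then:
  sb.updating = false;
  fire("update");
  fire("updateend");
}
```

The key observable behavior is that every successful append produces exactly one updatestart, update, updateend sequence, with sourceopen preceding it when the media source was in the "ended" state.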
appendStream

Appends segment data to the source buffer from a Stream.

Parameter Type Nullable Optional Description
stream Stream
maxSize unsigned long long
Return type: void

When this method is invoked, the user agent must run the following steps:

  1. If stream is null then throw an INVALID_ACCESS_ERR exception and abort these steps.
  2. If this object has been removed from the sourceBuffers attribute of the parent media source then throw an INVALID_STATE_ERR exception and abort these steps.
  3. If the updating attribute equals true, then throw an INVALID_STATE_ERR exception and abort these steps.
  4. If the readyState attribute of the parent media source is in the "ended" state then run the following steps:

    1. Set the readyState attribute of the parent media source to "open"
    2. Queue a task to fire a simple event named sourceopen at the parent media source.
  5. If the buffer full flag equals true, then throw a QUOTA_EXCEEDED_ERR exception and abort these steps.

    Note

    The web application must use remove() to free up space in the SourceBuffer.

  6. Set the updating attribute to true.
  7. Queue a task to fire a simple event named updatestart at this SourceBuffer object.
  8. Asynchronously run the stream append loop algorithm with stream and maxSize.
remove

Removes media for a specific time range.

Parameter Type Nullable Optional Description
start double
end double
Return type: void

When this method is invoked, the user agent must run the following steps:

  1. If start is negative or greater than duration, then throw an INVALID_ACCESS_ERR exception and abort these steps.
  2. If end is less than or equal to start, then throw an INVALID_ACCESS_ERR exception and abort these steps.
  3. If this object has been removed from the sourceBuffers attribute of the parent media source then throw an INVALID_STATE_ERR exception and abort these steps.
  4. If the updating attribute equals true, then throw an INVALID_STATE_ERR exception and abort these steps.
  5. If the readyState attribute of the parent media source is not in the "open" state then throw an INVALID_STATE_ERR exception and abort these steps.
  6. Set the updating attribute to true.
  7. Queue a task to fire a simple event named updatestart at this SourceBuffer object.
  8. Return control to the caller and run the rest of the steps asynchronously.
  9. For each track buffer in this source buffer, run the following steps:

    1. Let remove end timestamp be the current value of duration.
    2. If this track buffer has a random access point timestamp that is greater than or equal to end, then update remove end timestamp to that timestamp.

      Note

      Random access point timestamps can be different across tracks because the dependencies between coded frames within a track are usually different than the dependencies in another track.

    3. Remove all media data from this track buffer that has a starting timestamp greater than or equal to start and less than the remove end timestamp.
    4. If this object is in activeSourceBuffers, the current playback position is greater than or equal to start and less than the remove end timestamp, and HTMLMediaElement.readyState is greater than HAVE_METADATA, then set the HTMLMediaElement.readyState attribute to HAVE_METADATA and stall playback.

      Note

      This transition occurs because media data for the current position has been removed. Playback cannot progress until media for the current playback position is appended or the selected/enabled tracks change.

  10. If the buffer full flag equals true and this object is ready to accept more bytes, then set the buffer full flag to false.
  11. Set the updating attribute to false.
  12. Queue a task to fire a simple event named update at this SourceBuffer object.
  13. Queue a task to fire a simple event named updateend at this SourceBuffer object.
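The per-track-buffer removal in step 9 can be sketched as a filter over a list of frames. This is a simplified model: frames are plain objects whose `start` field doubles as the random access point timestamp, which is illustrative rather than normative:

```javascript
// Sketch of remove() steps 9.1-9.3 for one track buffer: the remove end
// timestamp is the duration, or the first random access point at or
// after `end` if one exists; frames starting in [start, removeEnd) are
// dropped. The {start, rap} frame shape is illustrative.
function removeRange(trackBuffer, start, end, duration) {
  let removeEnd = duration;                                   // step 9.1
  const rap = trackBuffer.find(f => f.rap && f.start >= end); // step 9.2
  if (rap) removeEnd = rap.start;
  // Step 9.3: remove frames whose start lies in [start, removeEnd).
  return trackBuffer.filter(f => !(f.start >= start && f.start < removeEnd));
}
```

Extending the removal up to the next random access point (rather than exactly to `end`) is what keeps the remaining frames decodable: a frame that survived but depended on a removed frame would otherwise be undecodable.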

3.3 Track Buffers

A track buffer stores the track descriptions and coded frames for an individual track. The track buffer is updated as initialization segments and media segments are appended to the SourceBuffer.

Each track buffer has a last decode timestamp variable that stores the decode timestamp of the last coded frame appended in the current append sequence. The variable is initially unset to indicate that no coded frames have been appended yet.

Each track buffer has a highest presentation timestamp variable that stores the highest presentation timestamp encountered in a coded frame appended in the current append sequence. The variable is initially unset to indicate that no coded frames have been appended yet.

Each track buffer has a need random access point flag variable that keeps track of whether the track buffer is waiting for a random access point coded frame. The variable is initially set to true to indicate that a random access point coded frame is needed before anything can be added to the track buffer.

3.4 Event Summary

Event name Interface Dispatched when...
updatestart Event updating transitions from false to true.
update Event The append or remove has successfully completed. updating transitions from true to false.
updateend Event The append or remove has ended.
error Event An error occurred during the append. updating transitions from true to false.
abort Event The append or remove was aborted by an abort() call. updating transitions from true to false.

3.5 Algorithms

3.5.1 Segment Parser Loop

All SourceBuffer objects have an internal append state variable that keeps track of the high-level segment parsing state. It is initially set to WAITING_FOR_SEGMENT and can transition to the following states as data is appended.

Append state name Description
WAITING_FOR_SEGMENT Waiting for the start of an initialization segment or media segment to be appended.
PARSING_INIT_SEGMENT Currently parsing an initialization segment.
PARSING_MEDIA_SEGMENT Currently parsing a media segment.

The input buffer is a byte buffer that is used to hold unparsed bytes across appendBuffer() and appendStream() calls. The buffer is empty when the SourceBuffer object is created.

The buffer full flag keeps track of whether appendBuffer() or appendStream() is allowed to accept more bytes. It is set to false when the SourceBuffer object is created and gets updated as data is appended and removed.

The abort mode variable keeps track of the AbortMode passed to the last abort() call. It is unset when the SourceBuffer object is created and gets updated by abort() and the coded frame processing algorithm.

The continuation timestamp variable keeps track of the start timestamp for the next append sequence if abort() is called with "continuation". It is unset when the SourceBuffer object is created and gets updated by abort() and the coded frame processing algorithm.

The highest presentation end timestamp variable stores the highest presentation end timestamp encountered in the current append sequence. It is unset when the SourceBuffer object is created and gets updated by the reset parser state algorithm and the coded frame processing algorithm.

When this algorithm is invoked, run the following steps:

  1. Loop Top: If the input buffer is empty, then jump to the need more data step below.
  2. If the input buffer starts with bytes that violate the byte stream format specifications, then run the append error algorithm and abort this algorithm.
  3. Remove any bytes that the byte stream format specifications say should be ignored from the start of the input buffer.
  4. If the append state equals WAITING_FOR_SEGMENT, then run the following steps:

    1. If the beginning of the input buffer indicates the start of an initialization segment, set the append state to PARSING_INIT_SEGMENT.
    2. If the beginning of the input buffer indicates the start of a media segment, set the append state to PARSING_MEDIA_SEGMENT.
    3. Jump to the loop top step above.
  5. If the append state equals PARSING_INIT_SEGMENT, then run the following steps:

    1. If the input buffer does not contain a complete initialization segment yet, then jump to the need more data step below.
    2. Run the initialization segment received algorithm.
    3. Remove the initialization segment bytes from the beginning of the input buffer.
    4. Set append state to WAITING_FOR_SEGMENT.
    5. Jump to the loop top step above.
  6. If the append state equals PARSING_MEDIA_SEGMENT, then run the following steps:

    1. If the first initialization segment flag is false, then run the append error algorithm and abort this algorithm.
    2. If the input buffer does not contain a complete media segment header yet, then jump to the need more data step below.

      Note

      Implementations may choose to implement this state as an incremental parser so that it is not necessary to have the entire media segment before running the coded frame processing algorithm.

    3. Run the coded frame processing algorithm.
    4. Remove the media segment bytes from the beginning of the input buffer.
    5. If this SourceBuffer is full and cannot accept more media data, then set the buffer full flag to true.
    6. Set append state to WAITING_FOR_SEGMENT.

      Note

      Incremental parsers should only do this transition after the entire media segment has been received.

    7. Jump to the loop top step above.
  7. Need more data: Return control to the calling algorithm.
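The append state machine above can be sketched as a loop over pre-parsed segments. Real implementations parse raw bytes per the byte stream format specifications; here each queued item is pre-tagged as "init" or "media" purely for illustration, and the media segment header check is elided:

```javascript
// Toy model of the segment parser loop's append state transitions.
// Segments arrive pre-tagged ({kind: "init"|"media"}), which stands in
// for byte-stream-format-specific detection; names are illustrative.
function parserLoop(state, inputBuffer, handlers) {
  while (inputBuffer.length > 0) {             // loop top / need more data
    const segment = inputBuffer[0];
    if (state.appendState === "WAITING_FOR_SEGMENT") {
      state.appendState = segment.kind === "init"
        ? "PARSING_INIT_SEGMENT" : "PARSING_MEDIA_SEGMENT";
    } else if (state.appendState === "PARSING_INIT_SEGMENT") {
      handlers.onInitSegment(segment);         // init segment received
      inputBuffer.shift();
      state.appendState = "WAITING_FOR_SEGMENT";
    } else if (state.appendState === "PARSING_MEDIA_SEGMENT") {
      // Step 6.1: media data before any init segment is an append error.
      if (!state.firstInitSegmentSeen) throw new Error("append error");
      handlers.onMediaSegment(segment);        // coded frame processing
      inputBuffer.shift();
      state.appendState = "WAITING_FOR_SEGMENT";
    }
  }
}
```

A usage consequence worth noting: appending a media segment before any initialization segment triggers the append error path, matching step 6.1.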

3.5.2 Reset Parser State

When the parser state needs to be reset, run the following steps:

  1. If the append state equals PARSING_MEDIA_SEGMENT and the input buffer contains some complete coded frames, then run the coded frame processing algorithm as if the media segment only contained these frames.
  2. Unset the last decode timestamp on all track buffers.
  3. Unset the highest presentation timestamp on all track buffers.
  4. Unset the highest presentation end timestamp.
  5. Set the need random access point flag on all track buffers to true.
  6. Remove all bytes from the input buffer.
  7. Set append state to WAITING_FOR_SEGMENT.

3.5.3 Append Error

When an error occurs during an append, run the following steps:

  1. Run the reset parser state algorithm.
  2. Abort the stream append loop algorithm if it is running.
  3. Set the updating attribute to false.
  4. Queue a task to fire a simple event named error at this SourceBuffer object.
    Issue 1

    Need a way to convey error information.

  5. Queue a task to fire a simple event named updateend at this SourceBuffer object.

3.5.4 Stream Append Loop

When a Stream is passed to appendStream(), the following steps are run to transfer data from the Stream to the SourceBuffer. This algorithm is initialized with the stream and maxSize parameters from the appendStream() call.

  1. If maxSize is set, then let bytesLeft equal maxSize.
  2. Loop Top: If maxSize is set and bytesLeft equals 0, then jump to the loop done step below.
  3. If stream has been closed, then jump to the loop done step below.
  4. If the buffer full flag equals true, then run the append error algorithm and abort this algorithm.

    Note

    The web application must use remove() to free up space in the SourceBuffer.

  5. Read data from stream into data:
    If maxSize is set:
    1. Read up to bytesLeft bytes from stream into data.
    2. Subtract the number of bytes in data from bytesLeft.
    Otherwise:
    Read all available bytes in stream into data.
  6. If an error occurred while reading from stream, then run the append error algorithm and abort this algorithm.
  7. Add data to the end of the input buffer.
  8. Run the segment parser loop algorithm.
  9. Jump to the loop top step above.
  10. Loop Done: Set the updating attribute to false.
  11. Queue a task to fire a simple event named update at this SourceBuffer object.
  12. Queue a task to fire a simple event named updateend at this SourceBuffer object.
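The maxSize accounting in steps 1, 2, and 5 can be sketched in isolation. The stream is modeled as an array of chunk byte counts; this shape, and the function name, are illustrative only:

```javascript
// Sketch of the stream append loop's byte budget (steps 1, 2, 5): read
// at most maxSize bytes total from a chunked stream, or everything when
// maxSize is unset. Chunks are modeled as byte counts for illustration.
function streamAppendBudget(chunks, maxSize) {
  let bytesLeft = maxSize;                                // step 1
  let appended = 0;
  for (const chunk of chunks) {
    if (maxSize !== undefined && bytesLeft === 0) break;  // step 2
    const take = maxSize === undefined
      ? chunk                                             // read all available
      : Math.min(chunk, bytesLeft);                       // step 5: up to bytesLeft
    appended += take;
    if (maxSize !== undefined) bytesLeft -= take;
  }
  return appended;
}
```

The point of the budget is that a single appendStream() call can be bounded even when the underlying Stream keeps producing data.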

3.5.5 Initialization Segment Received

The following steps are run when the segment parser loop successfully parses a complete initialization segment:

Each SourceBuffer object has an internal first initialization segment flag that tracks whether the first initialization segment has been appended. This flag is set to false when the SourceBuffer is created and updated by the algorithm below.

  1. Update the duration attribute if it currently equals NaN:
    If the initialization segment contains a duration:
    Run the duration change algorithm with new duration set to the duration in the initialization segment.
    Otherwise:
    Run the duration change algorithm with new duration set to positive Infinity.
  2. If the initialization segment has no audio, video, or text tracks, then call endOfStream("decode") and abort these steps.
  3. If the first initialization segment flag is true, then run the following steps:
    1. Verify the following properties. If any of the checks fail then call endOfStream("decode") and abort these steps.
    2. Add the appropriate track descriptions from this initialization segment to each of the track buffers.
  4. Let active track flag equal false.
  5. If the first initialization segment flag is false, then run the following steps:

    1. For each audio track in the initialization segment, run the following steps:

      1. Let new audio track be a new AudioTrack object.
      2. Generate a unique ID and assign it to the id property on new audio track.
      3. If audioTracks.length equals 0, then run the following steps:

        1. Set the enabled property on new audio track to true.
        2. Set active track flag to true.
      4. Add new audio track to the audioTracks attribute on this SourceBuffer object.
      5. Queue a task to fire a simple event named addtrack at audioTracks attribute on this SourceBuffer object.
      6. Add new audio track to the audioTracks attribute on the HTMLMediaElement.
      7. Queue a task to fire a simple event named addtrack at the audioTracks attribute on the HTMLMediaElement.
      8. Create a new track buffer to store coded frames for this track.
      9. Add the track description for this track to the track buffer.
    2. For each video track in the initialization segment, run the following steps:

      1. Let new video track be a new VideoTrack object.
      2. Generate a unique ID and assign it to the id property on new video track.
      3. If videoTracks.length equals 0, then run the following steps:

        1. Set the selected property on new video track to true.
        2. Set active track flag to true.
      4. Add new video track to the videoTracks attribute on this SourceBuffer object.
      5. Queue a task to fire a simple event named addtrack at videoTracks attribute on this SourceBuffer object.
      6. Add new video track to the videoTracks attribute on the HTMLMediaElement.
      7. Queue a task to fire a simple event named addtrack at the videoTracks attribute on the HTMLMediaElement.
      8. Create a new track buffer to store coded frames for this track.
      9. Add the track description for this track to the track buffer.
    3. For each text track in the initialization segment, run the following steps:

      1. Let new text track be a new TextTrack object with its properties populated with the appropriate information from the initialization segment.
      2. If the mode property on new text track equals "showing" or "hidden", then set active track flag to true.
      3. Add new text track to the textTracks attribute on this SourceBuffer object.
      4. Queue a task to fire a simple event named addtrack at textTracks attribute on this SourceBuffer object.
      5. Add new text track to the textTracks attribute on the HTMLMediaElement.
      6. Queue a task to fire a simple event named addtrack at the textTracks attribute on the HTMLMediaElement.
      7. Create a new track buffer to store coded frames for this track.
      8. Add the track description for this track to the track buffer.
    4. If active track flag equals true, then run the following steps:
      1. Add this SourceBuffer to activeSourceBuffers.
      2. Queue a task to fire a simple event named addsourcebuffer at activeSourceBuffers
    5. Set first initialization segment flag to true.
  6. If the HTMLMediaElement.readyState attribute is HAVE_NOTHING, then run the following steps:

    1. If one or more objects in sourceBuffers have first initialization segment flag set to false, then abort these steps.
    2. Set the HTMLMediaElement.readyState attribute to HAVE_METADATA.
    3. Queue a task to fire a simple event named loadedmetadata at the media element.
  7. If the active track flag equals true and the HTMLMediaElement.readyState attribute is greater than HAVE_CURRENT_DATA, then set the HTMLMediaElement.readyState attribute to HAVE_METADATA.
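The track-activation logic in step 5 can be sketched as follows. This is a simplified model of only the enabled/selected/active bookkeeping; event firing, track buffer creation, and the HTMLMediaElement track lists are elided, and the object shapes are illustrative:

```javascript
// Sketch of initialization-segment step 5: the first audio track is
// enabled, the first video track is selected, and either marks the
// SourceBuffer active. The sb/segment shapes are illustrative.
function processInitSegment(sb, segment) {
  let activeTrack = false;
  for (const t of segment.audioTracks) {
    const enabled = sb.audioTracks.length === 0;   // step 5.1.3
    if (enabled) activeTrack = true;
    sb.audioTracks.push({ id: t.id, enabled });
  }
  for (const t of segment.videoTracks) {
    const selected = sb.videoTracks.length === 0;  // step 5.2.3
    if (selected) activeTrack = true;
    sb.videoTracks.push({ id: t.id, selected });
  }
  sb.active = sb.active || activeTrack;            // step 5.4
  sb.firstInitSegmentSeen = true;                  // step 5.5
  return sb;
}
```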

3.5.6 Coded Frame Processing

When complete coded frames have been parsed by the segment parser loop, the following steps are run:

  1. For each coded frame in the media segment run the following steps:

    1. Let presentation timestamp be a double precision floating point representation of the coded frame's presentation timestamp in seconds.
    2. Let decode timestamp be a double precision floating point representation of the coded frame's decode timestamp in seconds.
      Note

      Implementations don't have to internally store timestamps in a double precision floating point representation. This representation is used here because it is the representation for timestamps in the HTML spec. The intention here is to make the behavior clear without adding unnecessary complexity to the algorithm to deal with the fact that adding a timestampOffset may cause a timestamp rollover in the underlying timestamp representation used by the bytestream format. Implementations can use any internal timestamp representation they wish, but the addition of timestampOffset should behave in a similar manner to what would happen if a double precision floating point representation was used.

    3. Let frame duration be a double precision floating point representation of the coded frame's duration in seconds.
    4. If abort mode is set, then run the following steps:
      1. If abort mode equals "continuation":
        Set timestampOffset equal to continuation timestamp - presentation timestamp.
        If abort mode equals "timestampOffset":
        1. Let old timestampOffset equal the current value of timestampOffset.
        2. Set timestampOffset equal to old timestampOffset - presentation timestamp.
      2. Unset continuation timestamp.
      3. Unset abort mode.
    5. If timestampOffset is not 0, then run the following steps:

      1. Add timestampOffset to the presentation timestamp.
      2. Add timestampOffset to the decode timestamp.
      3. If the presentation timestamp or decode timestamp is less than the presentation start time, then call endOfStream("decode"), and abort these steps.
    6. Let track buffer equal the track buffer that the coded frame should be added to.
    7. If last decode timestamp for track buffer is set and decode timestamp is less than last decode timestamp or the difference between decode timestamp and last decode timestamp is greater than 100 milliseconds, then call endOfStream("decode") and abort these steps.
      Note

      These checks trigger an error when the application attempts out-of-order appends without an intervening abort().

    8. Let frame end timestamp equal the sum of presentation timestamp and frame duration.
    9. If the need random access point flag on track buffer equals true, then run the following steps:
      1. If the coded frame is not a random access point, then drop the coded frame and jump to the top of the loop to start processing the next coded frame.
      2. Set the need random access point flag on track buffer to false.
    10. Let spliced frame be an unset variable for holding audio splice information.
    11. If last decode timestamp for track buffer is unset and presentation timestamp lies within a coded frame already stored in track buffer, then run the following steps:
      1. Let overlapped frame be the coded frame in track buffer that contains presentation timestamp.
      2. If track buffer contains audio coded frames, then run the audio splice frame algorithm and if a splice frame is returned, assign it to spliced frame.
      3. If track buffer contains video coded frames and presentation timestamp is less than 1 microsecond beyond the presentation timestamp of overlapped frame, then remove overlapped frame and any coded frames that depend on it from track buffer.
        Note

        This is to compensate for minor errors in frame timestamp computations that can appear when converting back and forth between double precision floating point numbers and rationals. This tolerance allows a frame to replace an existing one as long as it is within 1 microsecond of the existing frame's start time. Frames that come slightly before an existing frame are handled by the removal step below.

    12. Remove existing coded frames in track buffer:
      If highest presentation timestamp for track buffer is not set:
      Remove all coded frames from track buffer that have a presentation timestamp greater than or equal to presentation timestamp and less than frame end timestamp.
      If highest presentation timestamp for track buffer is set and less than presentation timestamp:
      Remove all coded frames from track buffer that have a presentation timestamp greater than highest presentation timestamp and less than or equal to frame end timestamp.
    13. Remove all coded frames from track buffer that have decoding dependencies on the coded frames removed in the previous step.
      Note

      For example if an I-frame is removed in the previous step, then all P-frames & B-frames that depend on that I-frame should be removed from track buffer. This makes sure that decode dependencies are properly maintained during overlaps.

    14. If spliced frame is set:
      Add spliced frame to the track buffer.
      Otherwise:
      Add the coded frame with the presentation timestamp, decode timestamp, and frame duration to the track buffer.
    15. Set last decode timestamp for track buffer to decode timestamp.
    16. If highest presentation timestamp for track buffer is unset or frame end timestamp is greater than highest presentation timestamp, then set highest presentation timestamp for track buffer to frame end timestamp.
      Note

      The greater than check is needed because bidirectional prediction between coded frames can cause presentation timestamp to not be monotonically increasing even though the decode timestamps are monotonically increasing.

    17. If highest presentation end timestamp is unset or frame end timestamp is greater than highest presentation end timestamp, then set highest presentation end timestamp equal to frame end timestamp.
  2. If the HTMLMediaElement.readyState attribute is HAVE_METADATA and the new coded frames cause all objects in activeSourceBuffers to have media data for the current playback position, then run the following steps:

    1. Set the HTMLMediaElement.readyState attribute to HAVE_CURRENT_DATA.
    2. If this is the first transition to HAVE_CURRENT_DATA, then queue a task to fire a simple event named loadeddata at the media element.
  3. If the HTMLMediaElement.readyState attribute is HAVE_CURRENT_DATA and the new coded frames cause all objects in activeSourceBuffers to have media data beyond the current playback position, then run the following steps:

    1. Set the HTMLMediaElement.readyState attribute to HAVE_FUTURE_DATA.
    2. Queue a task to fire a simple event named canplay at the media element.
  4. If the HTMLMediaElement.readyState attribute is HAVE_FUTURE_DATA and the new coded frames cause all objects in activeSourceBuffers to have enough data to ensure uninterrupted playback, then run the following steps:

    1. Set the HTMLMediaElement.readyState attribute to HAVE_ENOUGH_DATA.
    2. Queue a task to fire a simple event named canplaythrough at the media element.
  5. If the media segment contains data beyond the current duration, then run the duration change algorithm with new duration set to the maximum of the current duration and the highest end timestamp reported by HTMLMediaElement.buffered.
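Steps 5 and 7 of the loop body above can be sketched in isolation: apply timestampOffset and reject out-of-order or discontinuous appends. This is a simplified model; timestamps are seconds as doubles, the presentation start time is assumed to be 0, and the frame and track buffer shapes are illustrative:

```javascript
// Sketch of coded frame processing steps 5, 7, and 15: offset the
// timestamps, error on backwards or >100 ms decode-timestamp jumps,
// then record the last decode timestamp. Shapes are illustrative;
// presentation start time is assumed to be 0.
function processCodedFrame(trackBuffer, frame, timestampOffset) {
  const pts = frame.pts + timestampOffset;                 // step 5.1
  const dts = frame.dts + timestampOffset;                 // step 5.2
  if (pts < 0 || dts < 0) throw new Error("decode error"); // step 5.3
  const last = trackBuffer.lastDecodeTimestamp;
  if (last !== undefined && (dts < last || dts - last > 0.1)) {
    throw new Error("decode error");                       // step 7
  }
  trackBuffer.lastDecodeTimestamp = dts;                   // step 15
  trackBuffer.frames.push({ pts, dts, duration: frame.duration });
  return trackBuffer;
}
```

This is the check that forces an application to call abort() before appending a media segment from a different position in the presentation.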

3.5.7 Audio Splice Frame Algorithm

Follow these steps when the coded frame processing algorithm needs to generate a splice frame for two overlapping audio coded frames:

  1. Let track buffer be the track buffer that will contain the splice.
  2. Let new coded frame be the new coded frame, that is being added to track buffer, which triggered the need for a splice.
  3. Let presentation timestamp be the presentation timestamp for new coded frame.
  4. Let decode timestamp be the decode timestamp for new coded frame.
  5. Let frame duration be the duration of new coded frame.
  6. Let overlapped frame be the coded frame in track buffer that overlaps with new coded frame (i.e. it contains presentation timestamp).
  7. Round & update presentation timestamp and decode timestamp to the nearest audio sample timestamp based on sample rate of the audio in overlapped frame.
    Note

    For example, given the following values:

    • The presentation timestamp of overlapped frame equals 10.
    • The sample rate of overlapped frame equals 8000 Hz
    • presentation timestamp equals 10.01255
    • decode timestamp equals 10.01255

    presentation timestamp and decode timestamp are rounded & updated to 10.0125 since 10.01255 is closer to 10 + 100/8000 (10.0125) than 10 + 101/8000 (10.012625)

  8. If the user agent does not support crossfading then run the following steps:
    1. Remove overlapped frame from track buffer.
    2. Add a silence frame to track buffer with the following properties:
      • The presentation time set to the overlapped frame presentation time.
      • The decode time set to the overlapped frame decode time.
      • The frame duration set to the difference between presentation timestamp and the overlapped frame presentation time.
    3. Return to caller without providing a splice frame.
      Note

      This is intended to allow new coded frame to be added to the track buffer as if overlapped frame had not been in the track buffer to begin with.

  9. Let frame end timestamp equal the sum of presentation timestamp and frame duration.
  10. Let fade out coded frames equal overlapped frame as well as any additional frames in track buffer that overlap presentation timestamp plus the splice duration of 5 milliseconds.
  11. Remove all the frames included in fade out coded frames from track buffer.
  12. Return a splice frame with the following properties:
    • The presentation time set to the overlapped frame presentation time.
    • The decode time set to the overlapped frame decode time.
    • The frame duration set to the difference between frame end timestamp and the overlapped frame presentation time.
    • The fade out coded frames equals fade out coded frames.
    • The fade in coded frame equals new coded frame.
    • The splice timestamp equals presentation timestamp.
    Note

    See the audio splice rendering algorithm for details on how this splice frame is rendered.
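The sample-boundary rounding in step 7 reduces to snapping a timestamp to the nearest multiple of the sample period:

```javascript
// Step 7 of the audio splice frame algorithm: round a timestamp to the
// nearest audio sample boundary for the given sample rate.
function roundToSample(timestamp, sampleRate) {
  return Math.round(timestamp * sampleRate) / sampleRate;
}
```

Using the note's example, 10.01255 at 8000 Hz rounds to 10.0125, since 10.01255 is closer to sample boundary 10 + 100/8000 than to 10 + 101/8000.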

3.5.8 Audio Splice Rendering Algorithm

The following steps are run when a spliced frame, generated by the audio splice frame algorithm, needs to be rendered by the media element:

  1. Let fade out coded frames be the coded frames that are faded out during the splice.
  2. Let fade in coded frames be the coded frames that are faded in during the splice.
  3. Let presentation timestamp be the presentation timestamp of the first coded frame in fade out coded frames.
  4. Let end timestamp be the sum of the presentation timestamp and frame duration of the last frame in fade in coded frames.
  5. Let splice timestamp be the presentation timestamp where the splice starts. This corresponds with the presentation timestamp of the first frame in fade in coded frames.
  6. Let splice end timestamp equal splice timestamp plus five milliseconds.
  7. Let fade out samples be the samples generated by decoding fade out coded frames.
  8. Trim fade out samples so that it only contains samples between presentation timestamp and splice end timestamp.
  9. Let fade in samples be the samples generated by decoding fade in coded frames.
  10. Convert fade out samples and fade in samples to a common sample rate and channel layout.
  11. Let output samples be a buffer to hold the output samples.
  12. Apply a linear gain fade out to the samples between splice timestamp and splice end timestamp in fade out samples.
  13. Apply a linear gain fade in to the samples between splice timestamp and splice end timestamp in fade in samples.
  14. Copy samples between presentation timestamp to splice timestamp from fade out samples into output samples.
  15. For each sample between splice timestamp and splice end timestamp, compute the sum of a sample from fade out samples and the corresponding sample in fade in samples and store the result in output samples.
  16. Copy samples between splice end timestamp to end timestamp from fade in samples into output samples.
  17. Render output samples.
Note

Here is a graphical representation of this algorithm.

Audio splice diagram
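The crossfade at the heart of steps 12 through 15 can be sketched over plain sample arrays. This is a simplified model: the two inputs are assumed to already cover exactly the 5 ms splice window at a common sample rate and channel layout, and the linear ramp endpoints are illustrative:

```javascript
// Sketch of the splice-window mixing (steps 12-15): the fade-out gain
// ramps 1 -> 0 while the fade-in gain ramps 0 -> 1, and the two streams
// are summed sample by sample. Inputs are assumed pre-trimmed to the
// splice window; the buffer representation is illustrative.
function crossfade(fadeOutSamples, fadeInSamples) {
  const n = fadeOutSamples.length;
  const denom = n > 1 ? n - 1 : 1;  // avoid 0/0 for a 1-sample window
  const out = new Array(n);
  for (let i = 0; i < n; i++) {
    const gainIn = i / denom;       // linear fade in:  0 -> 1
    const gainOut = 1 - gainIn;     // linear fade out: 1 -> 0
    out[i] = fadeOutSamples[i] * gainOut + fadeInSamples[i] * gainIn;
  }
  return out;
}
```

Because the two gains sum to 1 at every sample, splicing a signal with itself reproduces the signal, which is the property that makes the transition inaudible for matching content.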

4. SourceBufferList Object

SourceBufferList is a simple container object for SourceBuffer objects. It provides read-only array access and fires events when the list is modified.

interface SourceBufferList : EventTarget {
    readonly attribute unsigned long length;
    getter SourceBuffer (unsigned long index);
};

4.1 Attributes

length of type unsigned long, readonly

Indicates the number of SourceBuffer objects in the list.

4.2 Methods

SourceBuffer

Allows the SourceBuffer objects in the list to be accessed with an array operator (i.e. []).

Parameter Type Nullable Optional Description
index unsigned long
Return type: SourceBuffer

When this method is invoked, the user agent must run the following steps:

  1. If index is greater than or equal to the length attribute then return undefined and abort these steps.
  2. Return the index'th SourceBuffer object in the list.

4.3 Event Summary

Event name Interface Dispatched when...
addsourcebuffer Event When a SourceBuffer is added to the list.
removesourcebuffer Event When a SourceBuffer is removed from the list.
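Non-normative example: the indexed getter gives array-style access to the list, and the events above fire when its membership changes. The function name is illustrative.

```javascript
// Illustrative only: observing a SourceBufferList as SourceBuffer objects
// are added to and removed from it.
function watchSourceBuffers(mediaSource) {
  var list = mediaSource.sourceBuffers;

  list.addEventListener('addsourcebuffer', function () {
    // Indices 0 .. length-1 return SourceBuffer objects;
    // out-of-range indices return undefined.
    for (var i = 0; i < list.length; i++)
      console.log('sourceBuffers[' + i + '] updating: ' + list[i].updating);
  });

  list.addEventListener('removesourcebuffer', function () {
    console.log('list now holds ' + list.length + ' SourceBuffer(s)');
  });
}
```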

5. URL Object

partial interface URL {
    static DOMString createObjectURL (MediaSource mediaSource);
};

5.1 Methods

createObjectURL, static

Creates URLs for MediaSource objects.

Note

This algorithm is intended to mirror the behavior of the File API createObjectURL() method with autoRevoke set to true.

Parameter    Type         Nullable  Optional
mediaSource  MediaSource  No        No
Return type: DOMString

When this method is invoked, the user agent must run the following steps:

  1. If mediaSource is NULL then return null and abort these steps.
  2. Return a unique MediaSource object URL that can be used to dereference the mediaSource argument, and run the rest of the algorithm asynchronously.
  3. Provide a stable state.
  4. Revoke the MediaSource object URL by calling revokeObjectURL() on it.

6. HTMLMediaElement attributes

This section specifies what existing attributes on the HTMLMediaElement should return when a MediaSource is attached to the element.

The HTMLMediaElement.seekable attribute returns a new static normalized TimeRanges object created based on the following steps:

If duration equals NaN
Return an empty TimeRanges object.
If duration equals positive Infinity
Return a single range with a start time of 0 and an end time equal to the highest end time reported by the HTMLMediaElement.buffered attribute.
Otherwise
Return a single range with a start time of 0 and an end time equal to duration.
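The three cases above can be sketched non-normatively, modelling TimeRanges objects as arrays of [start, end] pairs. The guard returning a zero-length range when buffered is empty is an assumption of this sketch; the specification does not address that case explicitly.

```javascript
// Non-normative sketch of the HTMLMediaElement.seekable computation.
// duration is the MediaSource duration; bufferedRanges models the ranges
// reported by HTMLMediaElement.buffered as [start, end] pairs.
function computeSeekable(duration, bufferedRanges) {
  if (isNaN(duration))
    return [];                                 // empty TimeRanges
  if (duration === Infinity) {
    if (bufferedRanges.length === 0)
      return [[0, 0]];                         // assumption: no buffered data
    var highestEnd = Math.max.apply(null, bufferedRanges.map(function (r) {
      return r[1];
    }));
    return [[0, highestEnd]];
  }
  return [[0, duration]];
}
```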

The HTMLMediaElement.buffered attribute returns a new static normalized TimeRanges object created based on the following steps:

  1. If activeSourceBuffers.length equals 0 then return an empty TimeRanges object and abort these steps.
  2. Let active ranges be the ranges returned by buffered for each SourceBuffer object in activeSourceBuffers.
  3. Let highest end time be the largest range end time in the active ranges.
  4. Let intersection ranges equal a TimeRange object containing a single range from 0 to highest end time.
  5. For each SourceBuffer object in activeSourceBuffers run the following steps:
    1. Let source ranges equal the ranges returned by the buffered attribute on the current SourceBuffer.
    2. If readyState is "ended", then set the end time on the last range in source ranges to highest end time.
    3. Let new intersection ranges equal the intersection between the intersection ranges and the source ranges.
    4. Replace the ranges in intersection ranges with the new intersection ranges.
  6. Return the intersection ranges.
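Steps 2-6 can be sketched as pure logic in JavaScript, again modelling TimeRanges objects as arrays of [start, end] pairs; all names are illustrative.

```javascript
// Pairwise intersection of two range lists (step 5.3).
function intersectRanges(a, b) {
  var result = [];
  a.forEach(function (ra) {
    b.forEach(function (rb) {
      var start = Math.max(ra[0], rb[0]);
      var end = Math.min(ra[1], rb[1]);
      if (start < end) result.push([start, end]);
    });
  });
  return result;
}

// Non-normative sketch of the HTMLMediaElement.buffered computation.
// sourceBufferRanges is one range list per SourceBuffer in activeSourceBuffers;
// readyState is the MediaSource readyState.
function computeBuffered(sourceBufferRanges, readyState) {
  if (sourceBufferRanges.length === 0) return [];

  var highestEnd = 0;                            // step 3
  sourceBufferRanges.forEach(function (ranges) {
    ranges.forEach(function (r) { highestEnd = Math.max(highestEnd, r[1]); });
  });

  var intersection = [[0, highestEnd]];          // step 4
  sourceBufferRanges.forEach(function (ranges) { // step 5
    var source = ranges.map(function (r) { return r.slice(); });
    if (readyState === 'ended' && source.length > 0)
      source[source.length - 1][1] = highestEnd; // step 5.2
    intersection = intersectRanges(intersection, source);
  });
  return intersection;
}
```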

7. AudioTrack Extensions

This section specifies extensions to the HTML AudioTrack definition.

partial interface AudioTrack {
             attribute DOMString     kind;
             attribute DOMString     language;
    readonly attribute SourceBuffer? sourceBuffer;
};

7.1 Attributes

kind of type DOMString

Allows the web application to get and update the track kind.

On getting, return the current value of the attribute. This is either the value provided when this object was created or the value provided on the last successful set operation.

On setting, run the following steps:

  1. If the value being assigned to this attribute does not match one of the kind categories, then abort these steps.
  2. Update this attribute to the new value.
  3. If the sourceBuffer attribute on this track is not null, then queue a task to fire a simple event named change at sourceBuffer.audioTracks.
  4. Queue a task to fire a simple event named change at the audioTracks attribute on the HTMLMediaElement.
language of type DOMString

Allows the web application to get and update the track language.

On getting, return the current value of the attribute. This is either the value provided when this object was created or the value provided on the last successful set operation.

On setting, run the following steps:

  1. If the value being assigned to this attribute is neither an empty string nor a BCP 47 language tag [BCP47], then abort these steps.
  2. Update this attribute to the new value.
  3. If the sourceBuffer attribute on this track is not null, then queue a task to fire a simple event named change at sourceBuffer.audioTracks.
  4. Queue a task to fire a simple event named change at the audioTracks attribute on the HTMLMediaElement.
sourceBuffer of type SourceBuffer, readonly, nullable

Returns the SourceBuffer that created this track. Returns null if this track was not created by a SourceBuffer or the SourceBuffer has been removed from the sourceBuffers attribute of its parent media source.
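Non-normative example of updating these attributes from script. The kind value shown follows the HTML kind categories for audio tracks, and the listener demonstrates the change events queued by the steps above; the function name is illustrative.

```javascript
// Illustrative only: assigning kind/language queues change events on the
// track lists; an invalid value is silently ignored per step 1 above.
function relabelFirstAudioTrack(video) {
  var tracks = video.audioTracks;
  if (tracks.length === 0) return;

  tracks.addEventListener('change', function () {
    console.log('track metadata changed');
  });

  tracks[0].kind = 'commentary';  // queues change on audioTracks (and on
                                  // sourceBuffer.audioTracks, if attached)
  tracks[0].language = 'en-US';   // must be '' or a BCP 47 language tag
}
```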

8. VideoTrack Extensions

This section specifies extensions to the HTML VideoTrack definition.

partial interface VideoTrack {
             attribute DOMString     kind;
             attribute DOMString     language;
    readonly attribute SourceBuffer? sourceBuffer;
};

8.1 Attributes

kind of type DOMString

Allows the web application to get and update the track kind.

On getting, return the current value of the attribute. This is either the value provided when this object was created or the value provided on the last successful set operation.

On setting, run the following steps:

  1. If the value being assigned to this attribute does not match one of the kind categories, then abort these steps.
  2. Update this attribute to the new value.
  3. If the sourceBuffer attribute on this track is not null, then queue a task to fire a simple event named change at sourceBuffer.videoTracks.
  4. Queue a task to fire a simple event named change at the videoTracks attribute on the HTMLMediaElement.
language of type DOMString

Allows the web application to get and update the track language.

On getting, return the current value of the attribute. This is either the value provided when this object was created or the value provided on the last successful set operation.

On setting, run the following steps:

  1. If the value being assigned to this attribute is neither an empty string nor a BCP 47 language tag [BCP47], then abort these steps.
  2. Update this attribute to the new value.
  3. If the sourceBuffer attribute on this track is not null, then queue a task to fire a simple event named change at sourceBuffer.videoTracks.
  4. Queue a task to fire a simple event named change at the videoTracks attribute on the HTMLMediaElement.
sourceBuffer of type SourceBuffer, readonly, nullable

Returns the SourceBuffer that created this track. Returns null if this track was not created by a SourceBuffer or the SourceBuffer has been removed from the sourceBuffers attribute of its parent media source.

9. TextTrack Extensions

This section specifies extensions to the HTML TextTrack definition.

partial interface TextTrack {
             attribute DOMString     kind;
             attribute DOMString     language;
    readonly attribute SourceBuffer? sourceBuffer;
};

9.1 Attributes

kind of type DOMString

Allows the web application to get and update the track kind.

On getting, return the current value of the attribute. This is either the value provided when this object was created or the value provided on the last successful set operation.

On setting, run the following steps:

  1. If the value being assigned to this attribute does not match one of the text track kinds, then abort these steps.
  2. Update this attribute to the new value.
  3. If the sourceBuffer attribute on this track is not null, then queue a task to fire a simple event named change at sourceBuffer.textTracks.
  4. Queue a task to fire a simple event named change at the textTracks attribute on the HTMLMediaElement.
language of type DOMString

Allows the web application to get and update the track language.

On getting, return the current value of the attribute. This is either the value provided when this object was created or the value provided on the last successful set operation.

On setting, run the following steps:

  1. If the value being assigned to this attribute is not a valid text track language, then abort these steps.
  2. Update this attribute to the new value.
  3. If the sourceBuffer attribute on this track is not null, then queue a task to fire a simple event named change at sourceBuffer.textTracks.
  4. Queue a task to fire a simple event named change at the textTracks attribute on the HTMLMediaElement.
sourceBuffer of type SourceBuffer, readonly, nullable

Returns the SourceBuffer that created this track. Returns null if this track was not created by a SourceBuffer or the SourceBuffer has been removed from the sourceBuffers attribute of its parent media source.

10. Byte Stream Formats

The bytes provided through appendBuffer() and appendStream() for a SourceBuffer form a logical byte stream. The format of this byte stream depends on the media container format in use and is defined in a byte stream format specification. Byte stream format specifications based on WebM , the ISO Base Media File Format, and MPEG-2 Transport Streams are provided below. These format specifications are intended to be the authoritative source for how data from these containers is formatted and passed to a SourceBuffer. If a MediaSource implementation claims to support any of these container formats, then it must implement the corresponding byte stream format specification described below.

This section provides general requirements for all byte stream formats:

Byte stream specifications must at a minimum define constraints which ensure that the above requirements hold. Additional constraints may be defined, for example to simplify implementation.

10.1 WebM Byte Streams

This section defines segment formats for implementations that choose to support WebM.

10.1.1 Initialization Segments

A WebM initialization segment must contain a subset of the elements at the start of a typical WebM file.

The following rules apply to WebM initialization segments:

  1. The initialization segment must start with an EBML Header element, followed by a Segment header.
  2. The size value in the Segment header must signal an "unknown size" or contain a value large enough to include the Segment Information and Tracks elements that follow.
  3. A Segment Information element and a Tracks element must appear, in that order, after the Segment header and before any further EBML Header or Cluster elements.
  4. Any elements other than an EBML Header or a Cluster that occur before, in between, or after the Segment Information and Tracks elements are ignored.

10.1.2 Media Segments

A WebM media segment is a single Cluster element.

The following rules apply to WebM media segments:

  1. The Timecode element in the Cluster contains a presentation timestamp in TimecodeScale units.
  2. The TimecodeScale in the most recently appended WebM initialization segment applies to all timestamps in the Cluster.
  3. The Cluster header may contain an "unknown" size value. If it does, then the end of the Cluster is reached when another Cluster header or an element header that indicates the start of a WebM initialization segment is encountered.
  4. Block & SimpleBlock elements must be in time increasing order consistent with the WebM spec.
  5. If the most recent WebM initialization segment describes multiple tracks, then blocks from all the tracks must be interleaved in time increasing order. At least one block from all audio and video tracks must be present.
  6. Cues or Chapters elements may follow a Cluster element. These elements must be accepted and ignored by the user agent.
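Non-normative sketch: a byte stream parser can distinguish the two WebM segment types by the leading EBML element ID (0x1A45DFA3 for the EBML Header that starts an initialization segment, 0x1F43B675 for a Cluster). The function is illustrative and is not part of this specification.

```javascript
// Classify an appended buffer by its leading four-byte EBML element ID.
function classifyWebMSegment(bytes) {
  if (bytes.length < 4) return 'unknown';

  var id = (bytes[0] << 24 >>> 0) + (bytes[1] << 16) +
           (bytes[2] << 8) + bytes[3];

  if (id === 0x1A45DFA3) return 'initialization segment';  // EBML Header
  if (id === 0x1F43B675) return 'media segment';           // Cluster
  return 'unknown';
}
```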

10.1.3 Random Access Points

A SimpleBlock element with its Keyframe flag set signals the location of a random access point for that track. Media segments containing multiple tracks are only considered a random access point if the first SimpleBlock for each track has its Keyframe flag set. The order of the multiplexed blocks must conform to the WebM Muxer Guidelines.

10.2 ISO Base Media File Format Byte Streams

This section defines segment formats for implementations that choose to support the ISO Base Media File Format ISO/IEC 14496-12 (ISO BMFF).

10.2.1 Initialization Segments

An ISO BMFF initialization segment is defined in this specification as a single Movie Header Box (moov). The tracks in the Movie Header Box must not contain any samples (i.e. the entry_count in the stts, stsc and stco boxes must be set to zero). A Movie Extends (mvex) box must be contained in the Movie Header Box to indicate that Movie Fragments are to be expected.

The initialization segment may contain Edit Boxes (edts) which provide a mapping of composition times for each track to the global presentation time.

Valid top-level boxes such as ftyp, styp, and sidx are allowed to appear before the moov box. These boxes must be accepted and ignored by the user agent and are not considered part of the initialization segment in this specification.

10.2.2 Media Segments

An ISO BMFF media segment is defined in this specification as a single Movie Fragment Box (moof) followed by one or more Media Data Boxes (mdat).

Valid top-level boxes defined in ISO/IEC 14496-12 other than moov, moof, and mdat are allowed to appear between the end of an initialization segment or media segment and before the beginning of a new media segment. These boxes must be accepted and ignored by the user agent and are not considered part of the media segment in this specification.

The following rules apply to ISO BMFF media segments:

  1. The Movie Fragment Box must contain at least one Track Fragment Box (traf).
  2. The Movie Fragment Box must use movie-fragment relative addressing and the flag default-base-is-moof must be set; absolute byte-offsets must not be used.
  3. External data references must not be used.
  4. If the Movie Fragment contains multiple tracks, the duration by which each track extends should be as close to equal as practical.
  5. Each Track Fragment Box must contain a Track Fragment Decode Time Box (tfdt).
  6. The Media Data Boxes must contain all the samples referenced by the Track Fragment Run Boxes (trun) of the Movie Fragment Box.
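Non-normative sketch of walking the top-level box structure of an appended buffer. Each ISO BMFF box begins with a 32-bit big-endian size followed by a four-character type, so a well-formed media segment presents as moof followed by one or more mdat. Extended (64-bit) and to-end-of-file sizes are not handled in this illustration.

```javascript
// List the four-character types of the top-level boxes in a buffer.
function listTopLevelBoxes(bytes) {
  var boxes = [];
  var offset = 0;
  while (offset + 8 <= bytes.length) {
    var size = (bytes[offset] << 24 >>> 0) + (bytes[offset + 1] << 16) +
               (bytes[offset + 2] << 8) + bytes[offset + 3];
    var type = String.fromCharCode(bytes[offset + 4], bytes[offset + 5],
                                   bytes[offset + 6], bytes[offset + 7]);
    if (size < 8) break;  // size 0 ("to end of file") and 1 (64-bit size)
                          // are not handled in this sketch
    boxes.push(type);
    offset += size;
  }
  return boxes;
}
```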

10.2.3 Random Access Points

A random access point as defined in this specification corresponds to a Stream Access Point of type 1 or 2 as defined in Annex I of ISO/IEC 14496-12.

10.3 MPEG-2 Transport Stream Byte Streams

This section defines segment formats for implementations that choose to support MPEG-2 Transport Streams (MPEG-2 TS) specified in ISO/IEC 13818-1.

10.3.1 General

MPEG-2 TS media and initialization segments must conform to the MPEG-2 TS Adaptive Profile (ISO/IEC 13818-1:2012 Amd. 2).

The following rules apply to all MPEG-2 TS segments:

  1. Segments must contain complete MPEG-2 TS packets.
  2. Segments must contain only complete PES packets and sections.
  3. Segments must contain exactly one program.
  4. All MPEG-2 TS packets must have the transport_error_indicator set to 0.

10.3.2 Initialization Segments

An MPEG-2 TS initialization segment must contain a single PAT and a single PMT. Other SI, such as CAT, that are invariant for all subsequent media segments, may be present.

10.3.3 Media Segments

The following rules apply to all MPEG-2 TS media segments:

  1. PSI that is identical to the information in the initialization segment may appear repeatedly throughout the segment.
  2. Media segments must not rely on initialization information in another media segment.
  3. Media Segments must contain only complete PES packets and sections.
  4. Each PES packet must have a PTS timestamp.
  5. PCR must be present in the Segment prior to the first byte of a TS packet payload containing media data.
  6. The presentation duration of each media component within the Media Segment should be as close to equal as practical.

10.3.4 Random Access Points

A random access point as defined in this specification corresponds to Elementary Stream Random Access Point as defined in ISO/IEC 13818-1.

10.3.5 Timestamp Rollover & Discontinuities

Timestamp rollovers and discontinuities must be handled by the UA. The UA's MPEG-2 TS implementation must maintain an internal offset variable, MPEG2TS_timestampOffset, to keep track of the offset that needs to be applied to timestamps that have rolled over or are part of a discontinuity. MPEG2TS_timestampOffset is initially set to 0 when the SourceBuffer is created. This offset must be applied to the timestamps as part of the conversion process from MPEG-2 TS packets into coded frames for the coded frame processing algorithm. This results in the coded frame timestamps for a packet being computed by the following equations:

Coded Frame Presentation Timestamp = (MPEG-2 TS presentation timestamp) + MPEG2TS_timestampOffset
Coded Frame Decode Timestamp = (MPEG-2 TS decode timestamp) + MPEG2TS_timestampOffset

MPEG2TS_timestampOffset is updated in the following ways:

  • Each time a timestamp rollover is detected, 2^33 must be added to MPEG2TS_timestampOffset.
  • When a discontinuity is detected, MPEG2TS_timestampOffset must be adjusted to make the timestamps after the discontinuity appear to come immediately after the timestamps before the discontinuity.
  • When abort() is called, MPEG2TS_timestampOffset must be set to 0.
  • When timestampOffset is successfully set, MPEG2TS_timestampOffset must be set to 0.
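The offset handling above can be sketched non-normatively. MPEG-2 TS timestamps are 33-bit values, so a rollover shows up as a large backwards jump; the half-range detection threshold used here is an illustrative heuristic, not a normative rule.

```javascript
// Non-normative sketch of MPEG2TS_timestampOffset maintenance.
var TWO_POW_33 = Math.pow(2, 33);

function makeTimestampMapper() {
  var mpeg2tsTimestampOffset = 0;  // also reset to 0 on abort() and on a
                                   // successful timestampOffset set
  var lastTimestamp = null;

  // Convert an MPEG-2 TS timestamp into a coded frame timestamp.
  return function toCodedFrameTimestamp(tsTimestamp) {
    if (lastTimestamp !== null &&
        tsTimestamp < lastTimestamp - TWO_POW_33 / 2)
      mpeg2tsTimestampOffset += TWO_POW_33;  // rollover detected
    lastTimestamp = tsTimestamp;
    return tsTimestamp + mpeg2tsTimestampOffset;
  };
}
```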

11. Examples

Example use of the Media Source Extensions

<script>
  function onSourceOpen(videoTag, e) {
    var mediaSource = e.target;
    var sourceBuffer = mediaSource.addSourceBuffer('video/webm; codecs="vorbis,vp8"');

    videoTag.addEventListener('seeking', onSeeking.bind(videoTag, mediaSource));
    videoTag.addEventListener('progress', onProgress.bind(videoTag, mediaSource));

    var initSegment = GetInitializationSegment();

    if (initSegment == null) {
      // Error fetching the initialization segment. Signal end of stream with an error.
      mediaSource.endOfStream("network");
      return;
    }

    // Append the initialization segment.
    var firstAppendHandler = function(e) {
      var sourceBuffer = e.target;
      sourceBuffer.removeEventListener('updateend', firstAppendHandler);

      // Append some initial media data.
      appendNextMediaSegment(mediaSource);
    };
    sourceBuffer.addEventListener('updateend', firstAppendHandler);
    sourceBuffer.appendBuffer(initSegment);
  }

  function appendNextMediaSegment(mediaSource) {
    if (mediaSource.readyState == "ended")
      return;

    // If we have run out of stream data, then signal end of stream.
    if (!HaveMoreMediaSegments()) {
      mediaSource.endOfStream();
      return;
    }

    // Make sure the previous append is not still pending.
    if (mediaSource.sourceBuffers[0].updating)
        return;

    var mediaSegment = GetNextMediaSegment();

    if (!mediaSegment) {
      // Error fetching the next media segment.
      mediaSource.endOfStream("network");
      return;
    }

    mediaSource.sourceBuffers[0].appendBuffer(mediaSegment);
  }

  function onSeeking(mediaSource, e) {
    var video = e.target;

    // Abort current segment append.
    mediaSource.sourceBuffers[0].abort();

    // Notify the media segment loading code to start fetching data at the
    // new playback position.
    SeekToMediaSegmentAt(video.currentTime);

    // Append a media segment from the new playback position.
    appendNextMediaSegment(mediaSource);
  }

  function onProgress(mediaSource, e) {
    appendNextMediaSegment(mediaSource);
  }
</script>

<video id="v" autoplay> </video>

<script>
  var video = document.getElementById('v');
  var mediaSource = new MediaSource();
  mediaSource.addEventListener('sourceopen', onSourceOpen.bind(this, video));
  video.src = window.URL.createObjectURL(mediaSource);
</script>

12. Revision History

Version Comment
05 March 2013
  • Bug 21170 - Remove 'stream aborted' step from stream append loop algorithm.
  • Bug 21171 - Added informative note about when addSourceBuffer() might throw a QUOTA_EXCEEDED_ERR exception.
  • Bug 20901 - Add support for 'continuation' and 'timestampOffset' abort modes.
  • Bug 21159 - Rename appendArrayBuffer to appendBuffer() and add ArrayBufferView overload.
  • Bug 21198 - Remove redundant 'closed' readyState checks.
25 February 2013
  • Remove Source Buffer Model section since all the behavior is covered by the algorithms now.
  • Bug 20899 - Remove media segments must start with a random access point requirement.
  • Bug 21065 - Update example code to use updating attribute instead of old appending attribute.
19 February 2013
  • Bug 19676, 20327 - Provide more detail for audio & video splicing.
  • Bug 20900 - Remove complete access unit constraint.
  • Bug 20948 - Setting timestampOffset in 'ended' triggers a transition to 'open'
  • Bug 20952 - Added update event.
  • Bug 20953 - Move end of append event firing out of segment parser loop.
  • Bug 21034 - Add steps to fire addtrack and removetrack events.
05 February 2013
  • Bug 19676 - Added a note clarifying that the internal timestamp representation doesn't have to be a double.
  • Added steps to the coded frame processing algorithm to remove old frames when new ones overlap them.
  • Fix isTypeSupported() return type.
  • Bug 18933 - Clarify what top-level boxes to ignore for ISO-BMFF.
  • Bug 18400 - Add a check to avoid creating huge hidden gaps when out-of-order appends occur w/o calling abort().
31 January 2013
  • Make remove() asynchronous.
  • Added steps to various algorithms to throw an INVALID_STATE_ERR exception when async appends or remove() are pending.
30 January 2013
  • Remove early abort step on 0-byte appends so the same events fire as a normal append with bytes.
  • Added definition for 'enough data to ensure uninterrupted playback'.
  • Updated buffered ranges algorithm to properly compute the ranges for Philip's example.
15 January 2013 Replace setTrackInfo() and getSourceBuffer() with AudioTrack, VideoTrack, and TextTrack extensions.
04 January 2013
  • Renamed append() to appendArrayBuffer() and made appending asynchronous.
  • Added SourceBuffer.appendStream().
  • Added SourceBuffer.setTrackInfo() methods.
  • Added issue boxes to relevant sections for outstanding bugs.
14 December 2012 Pubrules, Link Checker, and Markup Validation fixes.
13 December 2012
  • Added MPEG-2 Transport Stream section.
  • Added text to require abort() for out-of-order appends.
  • Renamed "track buffer" to "decoder buffer".
  • Redefined "track buffer" to mean the per-track buffers that hold the SourceBuffer media data.
  • Editorial fixes.
08 December 2012
  • Added MediaSource.getSourceBuffer() methods.
  • Section 2 cleanup.
06 December 2012
  • append() now throws a QUOTA_EXCEEDED_ERR when the SourceBuffer is full.
  • Added unique ID generation text to Initialization Segment Received algorithm.
  • Remove 2.x subsections that are already covered by algorithm text.
  • Rework byte stream format text so it doesn't imply that the MediaSource implementation must support all formats supported by the HTMLMediaElement.
28 November 2012
  • Added transition to HAVE_METADATA when current playback position is removed.
  • Added remove() calls to duration change algorithm.
  • Added MediaSource.isTypeSupported() method.
  • Remove initialization segments are optional text.
09 November 2012 Converted document to ReSpec.
18 October 2012 Refactored SourceBuffer.append() & added SourceBuffer.remove().
8 October 2012
  • Defined what HTMLMediaElement.seekable and HTMLMediaElement.buffered should return.
  • Updated seeking algorithm to run inside Step 10 of the HTMLMediaElement seeking algorithm.
  • Removed transition from "ended" to "open" in the seeking algorithm.
  • Clarified all the event targets.
1 October 2012 Fixed various addsourcebuffer & removesourcebuffer bugs and allow append() in ended state.
13 September 2012 Updated endOfStream() behavior to change based on the value of HTMLMediaElement.readyState.
24 August 2012
  • Added early abort on to duration change algorithm.
  • Added createObjectURL() IDL & algorithm.
  • Added Track ID & Track description definitions.
  • Rewrote start overlap for audio frames text.
  • Removed rendering silence requirement from section 2.5.
22 August 2012
  • Clarified WebM byte stream requirements.
  • Clarified SourceBuffer.buffered return value.
  • Clarified addsourcebuffer & removesourcebuffer event targets.
  • Clarified when media source attaches to the HTMLMediaElement.
  • Introduced duration change algorithm and updated relevant algorithms to use it.
17 August 2012 Minor editorial fixes.
09 August 2012 Change presentation start time to always be 0 instead of using format specific rules about the first media segment appended.
30 July 2012 Added SourceBuffer.timestampOffset and MediaSource.duration.
17 July 2012 Replaced SourceBufferList.remove() with MediaSource.removeSourceBuffer().
02 July 2012 Converted to the object-oriented API
26 June 2012 Converted to Editor's draft.
0.5 Minor updates before proposing to W3C HTML-WG.
0.4 Major revision. Adding source IDs, defining buffer model, and clarifying byte stream formats.
0.3 Minor text updates.
0.2 Updates to reflect initial WebKit implementation.
0.1 Initial Proposal

A. References

A.1 Informative references

[BCP47]
A. Phillips; M. Davis. Tags for Identifying Languages September 2009. IETF Best Current Practice. URL: http://tools.ietf.org/html/bcp47