This document defines a set of WebIDL objects that allow access to statistical information about an {{RTCPeerConnection}}.
These objects are returned from the getStats API that is specified in [[WEBRTC]].
Since the previous publication as a Candidate Recommendation, the stats objects were significantly reorganized to better match the underlying data sources. In addition, the networkType property was deprecated to preserve privacy, and the statsended event was removed as no longer needed.
Audio, video, or data packets transmitted over a peer-connection can be lost and can experience varying amounts of network delay. A web application implementing WebRTC expects to monitor the performance of the underlying network and media pipeline.
This document defines the statistic identifiers used by the web application to extract metrics from the user agent.
This specification defines the conformance criteria that apply to a single product: the user agent.
Implementations that use ECMAScript to implement the objects defined in this specification MUST implement them in a manner consistent with the ECMAScript Bindings defined in the Web IDL specification [[WEBIDL]], as this document uses that specification and terminology.
This specification does not define what objects a conforming implementation should generate. Specifications that refer to this specification need to specify conformance themselves; they should include in their document text like this (EXAMPLE ONLY):
The terms {{RTCPeerConnection}}, {{RTCDataChannel}}, {{RTCDtlsTransport}}, {{RTCDtlsTransportState}}, {{RTCIceTransport}}, {{RTCIceRole}}, {{RTCIceTransportState}}, {{RTCDataChannelState}}, {{RTCIceCandidateType}}, {{RTCStats}}, {{RTCCertificate}} are defined in [[!WEBRTC]].
RTCPriorityType is defined in [[WEBRTC-PRIORITY]].
The term RTP stream is defined in [[RFC7656]].
The terms Synchronization Source (SSRC), RTCP Sender Report (SR), RTCP Receiver Report (RR) are defined in [[RFC3550]].
The term RTCP Extended Report (XR) is defined in [[RFC3611]].
An audio sample refers to a sample in any channel of an audio track. If multiple audio channels are used, metrics based on samples do not increment at a higher rate; simultaneously having samples in multiple channels counts as a single sample.
The basic object of the stats model is the [= stats object =]. The following terms are defined to describe it:
An internal object that keeps a set of data values. Most monitored objects are objects defined in the WebRTC API; they may be thought of as being internal properties of those objects.
A monitored object has a stable identifier id, which is reflected in all stats objects produced from the monitored object. Stats objects may contain references to other stats objects using this id value. In a [= stats object =], these references are represented by a {{DOMString}} containing the id value of the referenced stats object.
All stats object references have type {{DOMString}} and member names ending in Id, or they have type sequence<{{DOMString}}> and member names ending in Ids.
A monitored object changes the values it contains continuously over its lifetime, but is never visible through the getStats API call. A stats object, once returned, never changes.
The stats API is defined in [[!WEBRTC]]. It is defined to return a collection of [= stats object =]s, each of which is a dictionary inheriting directly or indirectly from the RTCStats dictionary. This API is normatively defined in [[!WEBRTC]], but is reproduced here for ease of reference.
dictionary RTCStats {
  required DOMHighResTimeStamp timestamp;
  required RTCStatsType type;
  required DOMString id;
};
Timestamps are expressed with {{DOMHighResTimeStamp}} [[HIGHRES-TIME]], and are defined as {{Performance.timeOrigin}} + {{Performance.now()}} at the time the information is collected.
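The composition of a stats timestamp can be illustrated with the High Resolution Time API; this is a minimal sketch of the relationship, not how a user agent internally records it:

```javascript
// A stats timestamp is a wall-clock-anchored high-resolution time: the
// absolute start of the performance timeline (timeOrigin) plus the time
// elapsed since then (now()), both in milliseconds.
const statsTimestamp = performance.timeOrigin + performance.now();

// The result is on the same scale as Date.now() (milliseconds since the epoch),
// which makes timestamps from different stats objects directly comparable.
console.log(statsTimestamp);
```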
When introducing a new stats object, the following principles should be followed:
The new members of the stats dictionary need to be named according to standard practice (camelCase), as per [[API-DESIGN-PRINCIPLES]].
Names ending in Id (such as {{RTCRtpStreamStats/transportId}}) are always a [= stats object reference =]; names ending in Ids are always of type sequence<{{DOMString}}>, where each {{DOMString}} is a [= stats object reference =].
If the natural name for a stats value would end in id (such as when the stats value is an in-protocol identifier for the monitored object), the recommended practice is to let the name end in identifier, such as {{RTCDataChannelStats/dataChannelIdentifier}}.
Stats are sampled by JavaScript. In general, an application will not have overall control over how often stats are sampled, and the implementation cannot know what the intended use of the stats is. There is, by design, no control surface for the application to influence how stats are generated.
Therefore, letting the implementation compute "average" rates is not a good idea, since that implies some averaging time interval that can't be set beforehand. Instead, the recommended approach is to count the number of measurements of a value and to sum the measurements, even if the sum is meaningless in itself; the JS application can then compute averages over any desired time interval by calling getStats() twice, taking the difference of the two sums, and dividing by the difference of the two counts.
For stats that are measured against time, such as byte counts, no separate counter is needed; one can instead divide by the difference in the timestamps.
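The recommended two-call technique can be sketched as follows. The snapshot objects and helper names are illustrative, standing in for the same stats object taken from two getStats() results:

```javascript
// Average over an application-chosen interval from two stats snapshots:
// difference of sums divided by difference of counts, as recommended above.
function averageBetween(older, newer, sumKey, countKey) {
  return (newer[sumKey] - older[sumKey]) / (newer[countKey] - older[countKey]);
}

// For values measured against time (such as byte counts), divide by the
// difference of the two timestamps instead of a separate counter.
function bitrateBetween(older, newer, byteKey) {
  const seconds = (newer.timestamp - older.timestamp) / 1000; // timestamps in ms
  return (8 * (newer[byteKey] - older[byteKey])) / seconds;   // bits per second
}

// Illustrative snapshots of one "inbound-rtp" stats object taken 2 s apart.
const t1 = { timestamp: 1000, jitterBufferDelay: 4.0, jitterBufferEmittedCount: 200, bytesReceived: 250000 };
const t2 = { timestamp: 3000, jitterBufferDelay: 5.0, jitterBufferEmittedCount: 300, bytesReceived: 500000 };

console.log(averageBetween(t1, t2, "jitterBufferDelay", "jitterBufferEmittedCount")); // 0.01 s per emitted frame
console.log(bitrateBetween(t1, t2, "bytesReceived")); // 1000000 bit/s
```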
When implementing stats objects, the following guidelines should be adhered to:
The object descriptions will say what the lifetime of a [= monitored object =] from the perspective of stats is. When a monitored object is deleted, it no longer appears in stats; until this happens, it will appear. This may or may not correspond to the actual lifetime of an object in an implementation; what matters for this specification is what appears in stats.
If a monitored object can only exist in a few instances over the lifetime of a {{RTCPeerConnection}}, it may be simplest to consider it "eternal" and never delete it from the set of objects reported on in stats. This type of object will remain visible until the {{RTCPeerConnection}} is no longer available; it is also visible in {{RTCPeerConnection/getStats()}} after pc.close(). This is the default when no lifetime is mentioned in its specification.
Objects that might exist in many instances over time should have a defined time at which they are [= deleted =], at which time they stop appearing in subsequent calls to {{RTCPeerConnection/getStats()}}. When an object is [= deleted =], we can guarantee that no subsequent {{RTCPeerConnection/getStats()}} call will contain a [= stats object reference =] that references the deleted object. We also guarantee that the stats id of the deleted object will never be reused for another object. This ensures that an application that collects [= stats object =]s for deleted [= monitored object =]s will always be able to uniquely identify the object pointed to in the result of any {{RTCPeerConnection/getStats()}} call.
A call to {{RTCPeerConnection/getStats()}} touches many components of WebRTC and may take significant time to execute. The implementation may or may not utilize caching or throttling of {{RTCPeerConnection/getStats()}} calls for performance benefits, however any implementation must adhere to the following:
When the state of the {{RTCPeerConnection}} visibly changes as a result of an API call, a promise resolving or an event firing, subsequent new {{RTCPeerConnection/getStats()}} calls must return up-to-date dictionaries for the affected objects.
When a stats object is [= deleted =], subsequent {{RTCPeerConnection/getStats()}} calls MUST NOT return stats for that [= monitored object =].
This document, in its editors' draft form, serves as the repository for the currently defined set of stats object types including proposals for new standard types.
This document specifies the interoperable stats object types. Proposals for new object types may be made in the editors' draft maintained on GitHub. New standard types may appear in future revisions of the W3C Recommendation.
If a need for a new stats object type or stats value within a stats object is found, an issue should be raised on GitHub, and a review process will decide whether the stat should be added to the editors' draft or not.
A pull request for a change to the editors' draft may serve as guidance for the discussion, but the eventual merge is dependent on the review process.
While the WebRTC WG exists, it will serve as the review body; once it has disbanded, the W3C will have to establish appropriate review.
The level of review sought is that of the IETF process' "expert review", as defined in [[RFC5226]] section 4.1. The documentation needed includes the names of the new stats, their data types, and the definitions they are based on, specified to a level that allows interoperable implementation. The specification may consist of references to other documents.
Another specification that wishes to refer to a specific version (for instance for conformance) should refer to a dated version; these will be produced regularly when updates happen.
The WebRTC Statistics API exposes information about the system, including hardware capabilities and network characteristics. To limit the fingerprinting surface imposed by this API, some metrics are only exposed if allowed by the algorithms in this section.
To avoid passive fingerprinting, hardware capabilities should only be exposed in capturing contexts. This is tested using the algorithm below.
To check if hardware exposure is allowed, run the following steps:
If the context capturing state is true, return true.
Otherwise return false.
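The two steps above amount to a single boolean check; a minimal sketch, where `context` is a hypothetical stand-in for the user agent's internal capturing state:

```javascript
// Hardware capabilities are exposed only in capturing contexts: the check
// returns true exactly when the context's capturing state is true.
function hardwareExposureAllowed(context) {
  return context.capturingState === true;
}

console.log(hardwareExposureAllowed({ capturingState: true }));  // true
console.log(hardwareExposureAllowed({ capturingState: false })); // false
```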
The {{RTCStats/type}} member, of type {{RTCStatsType}}, indicates the type of the object that the {{RTCStats}} object represents. An object with a given {{RTCStats/type}} can have only one IDL dictionary type, but multiple {{RTCStats/type}} values may indicate the same IDL dictionary type; for example, {{RTCStatsType/"local-candidate"}} and {{RTCStatsType/"remote-candidate"}} both use the IDL dictionary type {{RTCIceCandidateStats}}.
This specification is normative for the allowed values of {{RTCStatsType}}.
enum RTCStatsType {
  "codec",
  "inbound-rtp",
  "outbound-rtp",
  "remote-inbound-rtp",
  "remote-outbound-rtp",
  "media-source",
  "media-playout",
  "peer-connection",
  "data-channel",
  "transport",
  "candidate-pair",
  "local-candidate",
  "remote-candidate",
  "certificate"
};
The following strings are valid values for {{RTCStatsType}}:
Statistics for a codec that is currently being used by RTP streams being sent or received by this {{RTCPeerConnection}} object. It is accessed by the {{RTCCodecStats}}.
Statistics for an inbound RTP stream that is currently received with this {{RTCPeerConnection}} object. It is accessed by the {{RTCInboundRtpStreamStats}}.
RTX streams do not show up as separate {{RTCInboundRtpStreamStats}} objects but affect the {{RTCReceivedRtpStreamStats/packetsReceived}}, {{RTCInboundRtpStreamStats/bytesReceived}}, {{RTCInboundRtpStreamStats/retransmittedPacketsReceived}} and {{RTCInboundRtpStreamStats/retransmittedBytesReceived}} counters of the relevant {{RTCInboundRtpStreamStats}} objects.
FEC streams do not show up as separate {{RTCInboundRtpStreamStats}} objects but affect the {{RTCReceivedRtpStreamStats/packetsReceived}}, {{RTCInboundRtpStreamStats/bytesReceived}}, {{RTCInboundRtpStreamStats/fecPacketsReceived}} and {{RTCInboundRtpStreamStats/fecBytesReceived}} counters of the relevant {{RTCInboundRtpStreamStats}} objects.
Statistics for an outbound RTP stream that is currently sent with this {{RTCPeerConnection}} object. It is accessed by the {{RTCOutboundRtpStreamStats}}.
When there are multiple RTP streams connected to the same sender due to using simulcast, there will be one {{RTCOutboundRtpStreamStats}} per RTP stream, with distinct values of the {{RTCRtpStreamStats/ssrc}} member. RTX streams do not show up as separate {{RTCOutboundRtpStreamStats}} objects but affect the {{RTCSentRtpStreamStats/packetsSent}}, {{RTCSentRtpStreamStats/bytesSent}}, {{RTCOutboundRtpStreamStats/retransmittedPacketsSent}} and {{RTCOutboundRtpStreamStats/retransmittedBytesSent}} counters of the relevant {{RTCOutboundRtpStreamStats}} objects.
Statistics for the remote endpoint's inbound RTP stream corresponding to an outbound stream that is currently sent with this {{RTCPeerConnection}} object. It is measured at the remote endpoint and reported in an RTCP Receiver Report (RR) or RTCP Extended Report (XR). It is accessed by the {{RTCRemoteInboundRtpStreamStats}}.
Statistics for the remote endpoint's outbound RTP stream corresponding to an inbound stream that is currently received with this {{RTCPeerConnection}} object. It is measured at the remote endpoint and reported in an RTCP Sender Report (SR). It is accessed by the {{RTCRemoteOutboundRtpStreamStats}}.
Statistics for the media produced by a {{MediaStreamTrack}} that is currently attached to an {{RTCRtpSender}}. This reflects the media that is fed to the encoder after getUserMedia() constraints have been applied (i.e. not the raw media produced by the camera). It is either an {{RTCAudioSourceStats}} or {{RTCVideoSourceStats}} depending on its kind.
Statistics related to audio playout. It is accessed by the {{RTCAudioPlayoutStats}}.
Statistics related to the {{RTCPeerConnection}} object. It is accessed by the {{RTCPeerConnectionStats}}.
Statistics related to each {{RTCDataChannel}} id. It is accessed by the {{RTCDataChannelStats}}.
Transport statistics related to the {{RTCPeerConnection}} object. It is accessed by the {{RTCTransportStats}}.
ICE candidate pair statistics related to the {{RTCIceTransport}} objects. It is accessed by the {{RTCIceCandidatePairStats}}.
A candidate pair that is not the current pair for a transport is [= deleted =] when the {{RTCIceTransport}} does an ICE restart, at the time the state changes to {{RTCIceTransportState/"new"}}. The candidate pair that is the current pair for a transport is [= deleted =] after an ICE restart when the {{RTCIceTransport}} switches to using a candidate pair generated from the new candidates; this time doesn't correspond to any other externally observable event.
ICE local candidate statistics related to the {{RTCIceTransport}} objects. It is accessed by the {{RTCIceCandidateStats}} for the local candidate.
A local candidate is [= deleted =] when the {{RTCIceTransport}} does an ICE restart, and the candidate is no longer a member of any non-deleted candidate pair.
ICE remote candidate statistics related to the {{RTCIceTransport}} objects. It is accessed by the {{RTCIceCandidateStats}} for the remote candidate.
A remote candidate is [= deleted =] when the {{RTCIceTransport}} does an ICE restart, and the candidate is no longer a member of any non-deleted candidate pair.
Information about a certificate used by an {{RTCIceTransport}}. It is accessed by the {{RTCCertificateStats}}.
The dictionaries for RTP statistics are structured as a hierarchy, so that those stats that make sense in many different contexts are represented just once in IDL.
The metrics exposed here correspond to local measurements and those reported by RTCP packets. Compound RTCP packets contain multiple RTCP report blocks, such as Sender Report (SR) and Receiver Report (RR), whereas a non-compound RTCP packet may contain just a single RTCP SR or RR block.
The lifetime of all RTP [= monitored object =]s starts when the RTP stream is first used: When the first RTP packet is sent or received on the SSRC it represents, or when the first RTCP packet is sent or received that refers to the SSRC of the RTP stream.
RTP monitored objects are deleted when the corresponding RTP sender or RTP receiver is reconfigured to remove the corresponding RTP stream. This happens for the old SSRC when the {{RTCRtpStreamStats/ssrc}} changes, a simulcast layer is dropped, or the {{RTCRtpTransceiver}}'s {{RTCRtpTransceiver/currentDirection}} becomes "stopped".
The monitored object is not deleted if the transceiver is made "inactive" or if the encoding's {{RTCRtpEncodingParameters/active}} parameter is set to false. If an SSRC is recycled after a deletion event has happened, this is considered a new RTP monitored object and the new RTP stream stats will have reset counters and a new ID.
For a given RTP stats object, its total counters must always increase, but due to changes in SSRC, simulcast layers dropping, or transceivers stopping, an RTP stats object can be deleted and/or replaced by a new RTP stats object. The caller will need to be aware of this when aggregating packet counters across multiple RTP stats objects (the aggregates may decrease due to deletions).
An RTCRtpSender is sending two layers of simulcast (on SSRC=111 and SSRC=222). Two "outbound-rtp" stats objects are observed, one with SSRC=111, and the other with SSRC=222. Both objects' packet counters are increasing. One layer is inactivated using RTCRtpSender.setParameters(). While this pauses one of the layers (its packet counter freezes), the RTP monitored objects are not deleted. The RTCRtpTransceiver is negotiated as "inactive" and the RTP monitored objects are still not deleted. When the RTCRtpTransceiver becomes "sendonly" again, the same "outbound-rtp" objects continue to be used. Later, RTCRtpTransceiver.stop() is called. The "outbound-rtp" objects still exist but their packet counters have frozen. Renegotiation happens and the transceiver.currentDirection becomes "stopped", now both "outbound-rtp" objects have been deleted.
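The aggregation caveat above can be handled by remembering each stats object's last observed value, keyed by its stats id (ids are guaranteed never to be reused for deleted objects). A sketch, with illustrative report contents:

```javascript
// Aggregate packetsSent across all "outbound-rtp" objects while tolerating
// deletion of monitored objects: a deleted object's final count keeps
// contributing to the total, so the aggregate never decreases.
function makePacketAggregator() {
  const lastSeen = new Map(); // stats id -> last observed packetsSent
  return function update(report) { // report: iterable of stats dictionaries
    for (const stats of report) {
      if (stats.type === "outbound-rtp") lastSeen.set(stats.id, stats.packetsSent);
    }
    let total = 0;
    for (const count of lastSeen.values()) total += count;
    return total;
  };
}

// Illustrative reports: the second report no longer contains OB2 (deleted).
const aggregate = makePacketAggregator();
aggregate([{ id: "OB1", type: "outbound-rtp", packetsSent: 10 },
           { id: "OB2", type: "outbound-rtp", packetsSent: 20 }]); // 30
const total = aggregate([{ id: "OB1", type: "outbound-rtp", packetsSent: 15 }]);
console.log(total); // 35: deleted OB2's final count of 20 is retained
```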
The hierarchy is as follows:
{{RTCRtpStreamStats}}: Stats that apply to any end of any RTP stream
{{RTCReceivedRtpStreamStats}}: Stats measured at the receiving end of an RTP stream, known either because they're measured locally or transmitted via an RTCP Receiver Report (RR) or Extended Report (XR) block.
{{RTCInboundRtpStreamStats}}: Stats that can only be measured at the local receiving end of an RTP stream.
{{RTCRemoteInboundRtpStreamStats}}: Stats relevant to the remote receiving end of an RTP stream - usually computed by combining local data with data received via an RTCP RR or XR block.
{{RTCSentRtpStreamStats}}: Stats measured at the sending end of an RTP stream, known either because they're measured locally or because they're received via RTCP, usually in an RTCP Sender Report (SR).
{{RTCRemoteOutboundRtpStreamStats}}: Stats relevant to the remote sending end of an RTP stream, usually computed based on an RTCP SR.
dictionary RTCRtpStreamStats : RTCStats {
  required unsigned long ssrc;
  required DOMString kind;
  DOMString transportId;
  DOMString codecId;
};
The synchronization source (SSRC) identifier is an unsigned integer value per [[RFC3550]] used to identify the stream of RTP packets that this stats object is describing.
For local outbound and inbound stats, the SSRC describes the RTP stream that was sent and received, respectively, by this endpoint. For remote inbound and remote outbound stats, the SSRC describes the RTP stream that was received by and sent to the remote endpoint, respectively.
Either "audio" or "video". This MUST match the kind attribute of the related {{MediaStreamTrack}}.
A unique identifier that is associated with the object that was inspected to produce the {{RTCTransportStats}} associated with this RTP stream.
A unique identifier that is associated with the object that was inspected to produce the {{RTCCodecStats}} associated with this RTP stream.
Codecs are created when registered for an RTP transport, but only the subset of codecs that are in use (referenced by an RTP stream) are exposed in getStats().
The {{RTCCodecStats}} object is created when one or more {{RTCRtpStreamStats/codecId}} references the codec. When there no longer exists any reference to the {{RTCCodecStats}}, the stats object is deleted. If the same codec is used again in the future, the {{RTCCodecStats}} object is revived with the same {{RTCStats/id}} as before.
Codec objects may be referenced by multiple RTP streams in media sections using the same transport, but similar codecs in different transports have different {{RTCCodecStats}} objects.
User agents are expected to coalesce information into a single "codec" entry per payload type per transport, unless {{RTCCodecStats/sdpFmtpLine}} differs per direction, in which case two entries (one for encode and one for decode) are needed.
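Following a {{RTCRtpStreamStats/codecId}} reference back to its "codec" entry can be sketched as follows; `report` stands in for the map-like result of getStats() (keyed by stats id), and the contents are illustrative:

```javascript
// Resolve the codec in use for each RTP stream by following the codecId
// stats object reference into the same report.
function codecsInUse(report) {
  const result = [];
  for (const stats of report.values()) {
    if (stats.type !== "inbound-rtp" && stats.type !== "outbound-rtp") continue;
    const codec = stats.codecId !== undefined ? report.get(stats.codecId) : undefined;
    result.push({ ssrc: stats.ssrc, mimeType: codec ? codec.mimeType : undefined });
  }
  return result;
}

// Illustrative report with one inbound video stream using VP8.
const report = new Map([
  ["C1", { id: "C1", type: "codec", payloadType: 96, mimeType: "video/VP8" }],
  ["IB1", { id: "IB1", type: "inbound-rtp", ssrc: 1111, codecId: "C1" }],
]);
console.log(codecsInUse(report)); // one entry: ssrc 1111 using video/VP8
```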
dictionary RTCCodecStats : RTCStats {
  required unsigned long payloadType;
  required DOMString transportId;
  required DOMString mimeType;
  unsigned long clockRate;
  unsigned long channels;
  DOMString sdpFmtpLine;
};
Payload type as used in RTP encoding or decoding.
The unique identifier of the transport on which this codec is being used, which can be used to look up the corresponding {{RTCTransportStats}} object.
The codec MIME media type/subtype defined in the IANA media types registry [[!IANA-MEDIA-TYPES]], e.g. video/VP8.
Represents the media sampling rate.
When present, indicates the number of channels (mono=1, stereo=2).
The "format specific parameters" field from the a=fmtp line in the SDP corresponding to the codec, if one [= map/exist =]s, as defined by [[!RFC8829]] (section 5.8).
dictionary RTCReceivedRtpStreamStats : RTCRtpStreamStats {
  unsigned long long packetsReceived;
  long long packetsLost;
  double jitter;
};
Total number of RTP packets received for this SSRC. This includes retransmissions. At the receiving endpoint, this is calculated as defined in [[!RFC3550]] section 6.4.1. At the sending endpoint, {{packetsReceived}} is estimated by subtracting the Cumulative Number of Packets Lost from the Extended Highest Sequence Number Received, both reported in the RTCP Receiver Report, then subtracting the initial Extended Sequence Number that was sent to this SSRC in an RTCP Sender Report, and then adding one, to mirror what is discussed in Appendix A.3 of [[!RFC3550]], but for the sender side. If no RTCP Receiver Report has been received yet, then return 0.
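The sender-side estimation described above can be sketched numerically; the field names here are illustrative stand-ins for the RTCP Receiver Report values:

```javascript
// Sender-side estimate of packetsReceived from an RTCP Receiver Report:
// (extended highest sequence number received) - (cumulative packets lost)
// - (initial extended sequence number sent) + 1, mirroring RFC 3550 A.3.
function estimatePacketsReceived(rr, initialExtendedSeq) {
  if (rr === undefined) return 0; // no Receiver Report received yet
  return rr.extendedHighestSeqReceived - rr.cumulativePacketsLost - initialExtendedSeq + 1;
}

// Example: sending started at extended sequence 1000; the RR reports the
// highest received sequence as 1999 with 5 packets lost.
console.log(estimatePacketsReceived(
  { extendedHighestSeqReceived: 1999, cumulativePacketsLost: 5 }, 1000)); // 995
console.log(estimatePacketsReceived(undefined, 1000)); // 0
```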
Total number of RTP packets lost for this SSRC. Calculated as defined in [[!RFC3550]] section 6.4.1. Note that because of how this is estimated, it can be negative if more packets are received than sent.
Packet Jitter measured in seconds for this SSRC. Calculated as defined in section 6.4.1. of [[!RFC3550]].
The RTCInboundRtpStreamStats dictionary represents the measurement metrics for the incoming RTP media stream. The timestamp reported in the statistics object is the time at which the data was sampled.
dictionary RTCInboundRtpStreamStats : RTCReceivedRtpStreamStats {
  required DOMString trackIdentifier;
  DOMString mid;
  DOMString remoteId;
  unsigned long framesDecoded;
  unsigned long keyFramesDecoded;
  unsigned long framesRendered;
  unsigned long framesDropped;
  unsigned long frameWidth;
  unsigned long frameHeight;
  double framesPerSecond;
  unsigned long long qpSum;
  double totalDecodeTime;
  double totalInterFrameDelay;
  double totalSquaredInterFrameDelay;
  unsigned long pauseCount;
  double totalPausesDuration;
  unsigned long freezeCount;
  double totalFreezesDuration;
  DOMHighResTimeStamp lastPacketReceivedTimestamp;
  unsigned long long headerBytesReceived;
  unsigned long long packetsDiscarded;
  unsigned long long fecBytesReceived;
  unsigned long long fecPacketsReceived;
  unsigned long long fecPacketsDiscarded;
  unsigned long long bytesReceived;
  unsigned long nackCount;
  unsigned long firCount;
  unsigned long pliCount;
  double totalProcessingDelay;
  DOMHighResTimeStamp estimatedPlayoutTimestamp;
  double jitterBufferDelay;
  double jitterBufferTargetDelay;
  unsigned long long jitterBufferEmittedCount;
  double jitterBufferMinimumDelay;
  unsigned long long totalSamplesReceived;
  unsigned long long concealedSamples;
  unsigned long long silentConcealedSamples;
  unsigned long long concealmentEvents;
  unsigned long long insertedSamplesForDeceleration;
  unsigned long long removedSamplesForAcceleration;
  double audioLevel;
  double totalAudioEnergy;
  double totalSamplesDuration;
  unsigned long framesReceived;
  DOMString decoderImplementation;
  DOMString playoutId;
  boolean powerEfficientDecoder;
  unsigned long framesAssembledFromMultiplePackets;
  double totalAssemblyTime;
  unsigned long long retransmittedPacketsReceived;
  unsigned long long retransmittedBytesReceived;
  unsigned long rtxSsrc;
  unsigned long fecSsrc;
  double totalCorruptionProbability;
  double totalSquaredCorruptionProbability;
  unsigned long long corruptionMeasurements;
};
The value of the {{MediaStreamTrack}}'s id attribute.
If the {{RTCRtpTransceiver}} owning this stream has a {{RTCRtpTransceiver/mid}} value that is not null, this is that value; otherwise this member MUST NOT be [= map/exist | present =].
The {{remoteId}} is used for looking up the remote {{RTCRemoteOutboundRtpStreamStats}} object for the same SSRC.
MUST NOT [= map/exist =] for audio. It represents the total number of frames correctly decoded for this RTP stream, i.e., frames that would be displayed if no frames are dropped.
MUST NOT [= map/exist =] for audio. It represents the total number of key frames, such as key frames in VP8 [[RFC6386]] or IDR-frames in H.264 [[RFC6184]], successfully decoded for this RTP media stream. This is a subset of {{framesDecoded}}. framesDecoded - keyFramesDecoded gives you the number of delta frames decoded.
MUST NOT [= map/exist =] for audio. It represents the total number of frames that have been rendered. It is incremented just after a frame has been rendered.
MUST NOT [= map/exist =] for audio. The total number of frames dropped prior to decode or dropped because the frame missed its display deadline for this receiver's track. The measurement begins when the receiver is created and is a cumulative metric as defined in Appendix A (g) of [[!RFC7004]].
MUST NOT [= map/exist =] for audio. Represents the width of the last decoded frame. Before the first frame is decoded this member MUST NOT [= map/exist =].
MUST NOT [= map/exist =] for audio. Represents the height of the last decoded frame. Before the first frame is decoded this member MUST NOT [= map/exist =].
MUST NOT [= map/exist =] for audio. The number of decoded frames in the last second.
MUST NOT [= map/exist =] for audio. The sum of the QP values of frames decoded by this receiver. The count of frames is in {{framesDecoded}}.
The definition of QP value depends on the codec; for VP8, the QP value is the value carried in the frame header as the syntax element y_ac_qi, as defined in [[RFC6386]] section 19.2. Its range is 0..127.
Note that the QP value is only an indication of quantizer values used; many formats have ways to vary the quantizer value within the frame.
MUST NOT [= map/exist =] for audio. Total number of seconds that have been spent decoding the {{framesDecoded}} frames of this stream. The average decode time can be calculated by dividing this value with {{framesDecoded}}. The time it takes to decode one frame is the time passed between feeding the decoder a frame and the decoder returning decoded data for that frame.
MUST NOT [= map/exist =] for audio. Sum of the interframe delays in seconds between consecutively rendered frames, recorded just after a frame has been rendered. The interframe delay variance can be calculated from {{totalInterFrameDelay}}, {{totalSquaredInterFrameDelay}}, and {{framesRendered}} according to the formula: ({{totalSquaredInterFrameDelay}} - {{totalInterFrameDelay}}^2/ {{framesRendered}})/{{framesRendered}}.
MUST NOT [= map/exist =] for audio. Sum of the squared interframe delays in seconds between consecutively rendered frames, recorded just after a frame has been rendered. See {{totalInterFrameDelay}} for details on how to calculate the interframe delay variance.
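The variance formula above can be applied directly to the running totals; a sketch with illustrative values:

```javascript
// Interframe delay variance from the running totals, per the formula above:
// (Σd² - (Σd)²/n) / n, with n = framesRendered.
function interFrameDelayVariance(stats) {
  const n = stats.framesRendered;
  return (stats.totalSquaredInterFrameDelay -
          (stats.totalInterFrameDelay ** 2) / n) / n;
}

// Illustrative: 4 rendered frames with interframe delays 0.02, 0.04, 0.02, 0.04 s.
const inboundVideo = {
  framesRendered: 4,
  totalInterFrameDelay: 0.12,                 // 2*0.02 + 2*0.04
  totalSquaredInterFrameDelay: 0.004,         // 2*0.02^2 + 2*0.04^2
};
console.log(interFrameDelayVariance(inboundVideo)); // ≈ 0.0001
```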
pauseCount of type unsigned long
MUST NOT [= map/exist =] for audio. Count the total number of video pauses experienced by this receiver. Video is considered to be paused if time passed since the last rendered frame exceeds 5 seconds. {{pauseCount}} is incremented when a frame is rendered after such a pause.
totalPausesDuration of type double
MUST NOT [= map/exist =] for audio. Total duration of pauses (for the definition of pause see {{pauseCount}}), in seconds. This value is updated when a frame is rendered.
freezeCount of type unsigned long
MUST NOT [= map/exist =] for audio. Count the total number of video freezes experienced by this receiver. It is a freeze if the frame duration, which is the time interval between two consecutively rendered frames, equals or exceeds Max(3 * avg_frame_duration_ms, avg_frame_duration_ms + 150), where avg_frame_duration_ms is the linear average of the durations of the last 30 rendered frames.
totalFreezesDuration of type double
MUST NOT [= map/exist =] for audio. Total duration of rendered frames which are considered as frozen (for the definition of freeze see {{freezeCount}}), in seconds. This value is updated when a frame is rendered.
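The freeze criterion above can be sketched as a predicate; names and values are illustrative:

```javascript
// A frame duration counts as a freeze if it equals or exceeds
// max(3 * avg, avg + 150) ms, where avg is the linear average duration
// of the last 30 rendered frames.
function isFreeze(frameDurationMs, last30DurationsMs) {
  const avg = last30DurationsMs.reduce((a, b) => a + b, 0) / last30DurationsMs.length;
  return frameDurationMs >= Math.max(3 * avg, avg + 150);
}

// At ~30 fps (33 ms average), the freeze threshold is 33 + 150 = 183 ms.
const durations = Array(30).fill(33);
console.log(isFreeze(100, durations)); // false
console.log(isFreeze(200, durations)); // true
```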
Represents the timestamp at which the last packet was received for this SSRC. This differs from {{RTCStats/timestamp}}, which represents the time at which the statistics were generated by the local endpoint.
Total number of RTP header and padding bytes received for this SSRC. This includes retransmissions. This does not include the size of transport layer headers such as IP or UDP. headerBytesReceived + bytesReceived equals the number of bytes received as payload over the transport.
The cumulative number of RTP packets discarded by the jitter buffer due to late or early-arrival, i.e., these packets are not played out. RTP packets discarded due to packet duplication are not reported in this metric [[XRBLOCK-STATS]]. Calculated as defined in [[!RFC7002]] section 3.2 and Appendix A.a.
Total number of RTP FEC bytes received for this SSRC, only including payload bytes. This is a subset of {{RTCInboundRtpStreamStats/bytesReceived}}. If a FEC mechanism that uses a different {{RTCRtpStreamStats/ssrc}} was negotiated, FEC packets are sent over a separate SSRC but are still accounted for here.
Total number of RTP FEC packets received for this SSRC. If a FEC mechanism that uses a different {{RTCRtpStreamStats/ssrc}} was negotiated, FEC packets are sent over a separate SSRC but are still accounted for here. This counter can also be incremented when receiving FEC packets in-band with media packets (e.g., with Opus).
Total number of RTP FEC packets received for this SSRC where the error correction payload was discarded by the application. This may happen 1. if all the source packets protected by the FEC packet were received or already recovered by a separate FEC packet, or 2. if the FEC packet arrived late, i.e., outside the recovery window, and the lost RTP packets have already been skipped during playout. This is a subset of {{fecPacketsReceived}}.
Total number of bytes received for this SSRC. This includes retransmissions. Calculated as defined in [[!RFC3550]] section 6.4.1.
MUST NOT [= map/exist =] for audio. Count the total number of Full Intra Request (FIR) packets, as defined in [[!RFC5104]] section 4.3.1, sent by this receiver. Does not count the RTCP FIR indicated in [[?RFC2032]] which was deprecated by [[?RFC4587]].
MUST NOT [= map/exist =] for audio. Count the total number of Picture Loss Indication (PLI) packets, as defined in [[!RFC4585]] section 6.3.1, sent by this receiver.
It is the sum of the time, in seconds, each audio sample or video frame takes from the time the first RTP packet is received (reception timestamp) to the time the corresponding sample or frame is decoded (decoded timestamp). At this point the audio sample or video frame is ready for playout by the MediaStreamTrack. Typically, ready for playout here means after the audio sample or video frame is fully decoded by the decoder.
Given the complexities involved, the time of arrival or the reception timestamp is measured as close to the network layer as possible and the decoded timestamp is measured as soon as the complete sample or frame is decoded.
In the case of audio, where several samples are received in the same RTP packet, all samples share the same reception timestamp but different decoded timestamps. In the case of video, where the frame is received over several RTP packets, the earliest timestamp containing the frame is counted as the reception timestamp, and the decoded timestamp corresponds to when the complete frame is decoded.
This metric is not incremented for frames that are not decoded, i.e., {{RTCInboundRtpStreamStats/framesDropped}}. The average processing delay can be calculated by dividing the {{totalProcessingDelay}} with the {{framesDecoded}} for video (or totalSamplesDecoded, defined in the provisional stats specification, for audio).
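The averaging described above can also be applied to the interval between two getStats() snapshots. The sketch below illustrates this with hypothetical "inbound-rtp" stats values, not live browser data:

```javascript
// Average per-frame processing delay over the interval between two
// hypothetical getStats() snapshots of the same "inbound-rtp" object.
function averageProcessingDelay(prev, curr) {
  const frames = curr.framesDecoded - prev.framesDecoded;
  if (frames <= 0) return undefined; // nothing decoded in this interval
  return (curr.totalProcessingDelay - prev.totalProcessingDelay) / frames;
}

const prev = { framesDecoded: 3000, totalProcessingDelay: 45.0 };
const curr = { framesDecoded: 3300, totalProcessingDelay: 49.5 };
// (49.5 - 45.0) / 300 frames = 0.015 s, i.e. 15 ms per frame on average
console.log(averageProcessingDelay(prev, curr));
```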
Count the total number of Negative ACKnowledgement (NACK) packets, as defined in [[!RFC4585]] section 6.2.1, sent by this receiver.
This is the estimated playout time of this receiver's track. The playout time is the NTP timestamp of the last playable audio sample or video frame that has a known timestamp (from an RTCP SR packet mapping RTP timestamps to NTP timestamps), extrapolated with the time elapsed since it was ready to be played out. This is the "current time" of the track in NTP clock time of the sender and can be present even if there is no audio currently playing.
This can be useful for estimating how much audio and video is out of sync for two tracks from the same source, audioInboundRtpStats.{{estimatedPlayoutTimestamp}} - videoInboundRtpStats.{{estimatedPlayoutTimestamp}}.
The purpose of the jitter buffer is to recombine RTP packets into frames (in the case of video) and to achieve smooth playout. The model described here assumes that the samples or frames are still compressed and have not yet been decoded. It is the sum of the time, in seconds, each audio sample or video frame takes from the time the first packet is received by the jitter buffer (ingest timestamp) to the time it exits the jitter buffer (emit timestamp). In the case of audio, several samples belong to the same RTP packet, hence they will have the same ingest timestamp but different jitter buffer emit timestamps. In the case of video, the frame may be received over several RTP packets, hence the ingest timestamp is the earliest packet of the frame that entered the jitter buffer and the emit timestamp is when the whole frame exits the jitter buffer. This metric increases upon samples or frames exiting, having completed their time in the buffer (and incrementing {{jitterBufferEmittedCount}}). The average jitter buffer delay can be calculated by dividing the {{jitterBufferDelay}} with the {{jitterBufferEmittedCount}}.
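As a minimal sketch of the division described above, with hypothetical cumulative counter values from an "inbound-rtp" stats object:

```javascript
// Average jitter buffer delay per emitted sample/frame, as described in
// the spec text above. Counter values are hypothetical.
function averageJitterBufferDelay(stats) {
  if (stats.jitterBufferEmittedCount === 0) return undefined;
  return stats.jitterBufferDelay / stats.jitterBufferEmittedCount;
}

const stats = { jitterBufferDelay: 12.0, jitterBufferEmittedCount: 150 };
console.log(averageJitterBufferDelay(stats)); // 0.08 s
```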
jitterBufferTargetDelay, of type double.
This value is increased by the target jitter buffer delay every time a sample is emitted by the jitter buffer. The added target is the target delay, in seconds, at the time that the sample was emitted from the jitter buffer. To get the average target delay, divide by {{jitterBufferEmittedCount}}.
The total number of audio samples or video frames that have come out of the jitter buffer (increasing {{jitterBufferDelay}}).
There are various reasons why the jitter buffer delay might be increased to a higher value, such as to achieve AV synchronization or because a jitterBufferTarget was set on a RTCRtpReceiver. When using one of these mechanisms, it can be useful to keep track of the minimal jitter buffer delay that could have been achieved, so WebRTC clients can track the amount of additional delay that is being added.
This metric works the same way as {{jitterBufferTargetDelay}}, except that it is not affected by external mechanisms that increase the jitter buffer target delay, such as jitterBufferTarget (see link above), AV sync, or any other mechanisms. This metric is purely based on the network characteristics such as jitter and packet loss, and can be seen as the minimum obtainable jitter buffer delay if no external factors would affect it. The metric is updated every time {{jitterBufferEmittedCount}} is updated.
MUST NOT [= map/exist =] for video. The total number of samples that have been received on this RTP stream. This includes {{concealedSamples}}.
MUST NOT [= map/exist =] for video. The total number of samples that are concealed samples. A concealed sample is a sample that was replaced with synthesized samples generated locally before being played out. Examples of samples that have to be concealed are samples from lost packets (reported in {{RTCReceivedRtpStreamStats/packetsLost}}) or samples from packets that arrive too late to be played out (reported in {{RTCInboundRtpStreamStats/packetsDiscarded}}).
MUST NOT [= map/exist =] for video. The total number of concealed samples inserted that are "silent". Playing out silent samples results in silence or comfort noise. This is a subset of {{concealedSamples}}.
MUST NOT [= map/exist =] for video. The number of concealment events. This counter increases every time a concealed sample is synthesized after a non-concealed sample. That is, multiple consecutive concealed samples will increase the {{concealedSamples}} count multiple times but is a single concealment event.
MUST NOT [= map/exist =] for video. When playout is slowed down, this counter is increased by the difference between the number of samples received and the number of samples played out. If playout is slowed down by inserting samples, this will be the number of inserted samples.
MUST NOT [= map/exist =] for video. When playout is sped up, this counter is increased by the difference between the number of samples received and the number of samples played out. If speedup is achieved by removing samples, this will be the count of samples removed.
MUST NOT [= map/exist =] for video. Represents the audio level of the receiving track. For audio levels of tracks attached locally, see {{RTCAudioSourceStats}} instead.
The value is between 0..1 (linear), where 1.0 represents 0 dBov, 0 represents silence, and 0.5 represents approximately 6 dBSPL change in the sound pressure level from 0 dBov.
The {{audioLevel}} is averaged over some small interval, using the algorithm described under {{totalAudioEnergy}}. The interval used is implementation dependent.
MUST NOT [= map/exist =] for video. Represents the audio energy of the receiving track. For audio energy of tracks attached locally, see {{RTCAudioSourceStats}} instead.
This value MUST be computed as follows: for each audio sample that is received (and thus counted by {{totalSamplesReceived}}), add the sample's value divided by the highest-intensity encodable value, squared and then multiplied by the duration of the sample in seconds. In other words, duration * Math.pow(energy/maxEnergy, 2).
This can be used to obtain a root mean square (RMS) value that uses the same units as {{audioLevel}}, as defined in [[RFC6464]]. It can be converted to these units using the formula Math.sqrt(totalAudioEnergy/totalSamplesDuration). This calculation can also be performed using the differences between the values of two different {{RTCPeerConnection/getStats()}} calls, in order to compute the average audio level over any desired time interval. In other words, do Math.sqrt((energy2 - energy1)/(duration2 - duration1)).
For example, if a 10ms packet of audio is produced with an RMS of 0.5 (out of 1.0), this should add 0.5 * 0.5 * 0.01 = 0.0025 to {{totalAudioEnergy}}. If another 10ms packet with an RMS of 0.1 is received, this should similarly add 0.0001 to {{totalAudioEnergy}}. Then, Math.sqrt(totalAudioEnergy/totalSamplesDuration) becomes Math.sqrt(0.0026/0.02) = 0.36, which is the same value that would be obtained by doing an RMS calculation over the contiguous 20ms segment of audio.
If multiple audio channels are used, the audio energy of a sample refers to the highest energy of any channel.
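The interval-based averaging described above can be sketched as follows, reproducing the worked example with two hypothetical snapshots:

```javascript
// Average audio level (RMS) between two hypothetical getStats() snapshots,
// using the formula Math.sqrt((energy2 - energy1)/(duration2 - duration1)).
function averageAudioLevel(prev, curr) {
  const dEnergy = curr.totalAudioEnergy - prev.totalAudioEnergy;
  const dDuration = curr.totalSamplesDuration - prev.totalSamplesDuration;
  if (dDuration <= 0) return undefined;
  return Math.sqrt(dEnergy / dDuration);
}

// Two 10 ms packets with RMS 0.5 and 0.1, as in the worked example.
const prev = { totalAudioEnergy: 0, totalSamplesDuration: 0 };
const curr = { totalAudioEnergy: 0.0025 + 0.0001, totalSamplesDuration: 0.02 };
console.log(averageAudioLevel(prev, curr)); // ≈ 0.36
```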
MUST NOT [= map/exist =] for video. Represents the audio duration of the receiving track. For audio durations of tracks attached locally, see {{RTCAudioSourceStats}} instead.
Represents the total duration in seconds of all samples that have been received (and thus counted by {{totalSamplesReceived}}). Can be used with {{totalAudioEnergy}} to compute an average audio level over different intervals.
MUST NOT [= map/exist =] for audio. Represents the total number of complete frames received on this RTP stream. This metric is incremented when the complete frame is received.
MUST NOT [= map/exist =] unless [= exposing hardware is allowed =].
MUST NOT [= map/exist =] for audio. Identifies the decoder implementation used. This is useful for diagnosing interoperability issues.
MUST NOT [= map/exist =] for video. If audio playout is happening, this is used to look up the corresponding {{RTCAudioPlayoutStats}}.
MUST NOT [= map/exist =] unless [= exposing hardware is allowed =].
MUST NOT [= map/exist =] for audio. Whether the decoder currently used is considered power efficient by the user agent. This SHOULD reflect if the configuration results in hardware acceleration, but the user agent MAY take other information into account when deciding if the configuration is considered power efficient.
MUST NOT [= map/exist =] for audio. It represents the total number of frames correctly decoded for this RTP stream that consist of more than one RTP packet. For such frames the {{totalAssemblyTime}} is incremented. The average frame assembly time can be calculated by dividing the {{totalAssemblyTime}} with {{framesAssembledFromMultiplePackets}}.
MUST NOT [= map/exist =] for audio. The sum of the time, in seconds, each video frame takes from the time the first RTP packet is received (reception timestamp) to the time the last RTP packet of a frame is received. Only incremented for frames consisting of more than one RTP packet.
Given the complexities involved, the time of arrival or the reception timestamp is measured as close to the network layer as possible. This metric is not incremented for frames that are not decoded, i.e., {{framesDropped}} or frames that fail decoding for other reasons (if any). Only incremented for frames consisting of more than one RTP packet.
The total number of retransmitted packets that were received for this SSRC. This is a subset of {{RTCReceivedRtpStreamStats/packetsReceived}}. If RTX is not negotiated, retransmitted packets can not be identified and this member MUST NOT [= map/exist =].
The total number of retransmitted bytes that were received for this SSRC, only including payload bytes. This is a subset of {{RTCInboundRtpStreamStats/bytesReceived}}. If RTX is not negotiated, retransmitted packets can not be identified and this member MUST NOT [= map/exist =].
If RTX is negotiated for retransmissions on a separate RTP stream, this is the SSRC of the RTX stream that is associated with this stream's {{RTCRtpStreamStats/ssrc}}. If RTX is not negotiated, this value MUST NOT be [= map/exist | present =].
If a FEC mechanism that uses a separate RTP stream is negotiated, this is the SSRC of the FEC stream that is associated with this stream's {{RTCRtpStreamStats/ssrc}}. If FEC is not negotiated or uses the same RTP stream, this value MUST NOT be [= map/exist | present =].
MUST NOT [= map/exist =] for audio. Represents the cumulative sum of all corruption probability measurements that have been made for this SSRC, see {{corruptionMeasurements}} regarding when this attribute SHOULD be [= map/exist | present =].
Each measurement added to {{totalCorruptionProbability}} MUST be in the range [0.0, 1.0], where a value of 0.0 indicates the system has estimated there is no or negligible corruption present in the processed frame. Similarly a value of 1.0 indicates there is almost certainly a corruption visible in the processed frame. A value in between those two indicates there is likely some corruption visible, but it could for instance have a low magnitude or be present only in a small portion of the frame.
The corruption likelihood values are estimates - not guarantees. Even if the estimate is 0.0, there could be corruptions present (i.e. it's a false negative) for instance if only a very small area of the frame is affected. Similarly, even if the estimate is 1.0 there might not be a corruption present (i.e. it's a false positive) for instance if there are macroblocks with a QP far higher than the frame average. Just like there are edge cases for e.g. PSNR measurements, these metrics should primarily be used as a basis for statistical analysis rather than be used as an absolute truth on a per-frame basis.
MUST NOT [= map/exist =] for audio. Represents the cumulative sum of all corruption probability measurements squared that have been made for this SSRC, see {{corruptionMeasurements}} regarding when this attribute SHOULD be [= map/exist | present =].
MUST NOT [= map/exist =] for audio. When the user agent is able to make a corruption probability measurement, this counter is incremented for each such measurement and {{totalCorruptionProbability}} and {{totalSquaredCorruptionProbability}} are aggregated with this measurement and measurement squared respectively. If the corruption-detection header extension is present in the RTP packets, corruption probability measurements MUST be [= map/exist | present =].
The corruption-detection header extension documented at http://www.webrtc.org/experiments/rtp-hdrext/corruption-detection is experimental. The identifier and format may change once an IETF standard has been established.
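Because the spec exposes both the cumulative sum and the cumulative squared sum, the mean and standard deviation of the corruption probability can be recovered. A minimal sketch, using hypothetical stats values:

```javascript
// Mean and standard deviation of corruption probability, derived from
// totalCorruptionProbability and totalSquaredCorruptionProbability.
// The stats values below are hypothetical.
function corruptionStats(stats) {
  const n = stats.corruptionMeasurements;
  if (n === 0) return undefined;
  const mean = stats.totalCorruptionProbability / n;
  const variance = stats.totalSquaredCorruptionProbability / n - mean * mean;
  // Clamp at 0 to guard against floating-point rounding.
  return { mean, stdDev: Math.sqrt(Math.max(variance, 0)) };
}

const stats = {
  corruptionMeasurements: 4,
  totalCorruptionProbability: 1.0,       // e.g. measurements 0.1, 0.2, 0.3, 0.4
  totalSquaredCorruptionProbability: 0.3 // 0.01 + 0.04 + 0.09 + 0.16
};
console.log(corruptionStats(stats)); // mean 0.25, stdDev ≈ 0.112
```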
The RTCRemoteInboundRtpStreamStats dictionary represents the remote endpoint's measurement metrics for a particular incoming RTP stream (corresponding to an outgoing RTP stream at the sending endpoint). The timestamp reported in the statistics object is the time at which the corresponding RTCP RR was received.
dictionary RTCRemoteInboundRtpStreamStats : RTCReceivedRtpStreamStats { DOMString localId; double roundTripTime; double totalRoundTripTime; double fractionLost; unsigned long long roundTripTimeMeasurements; };
The {{localId}} is used for looking up the local {{RTCOutboundRtpStreamStats}} object for the same SSRC.
Estimated round trip time for this SSRC based on the RTCP timestamps in the RTCP Receiver Report (RR) and measured in seconds. Calculated as defined in section 6.4.1. of [[!RFC3550]]. MUST NOT [= map/exist =] until an RTCP Receiver Report with a DLSR value other than 0 has been received.
Represents the cumulative sum of all round trip time measurements in seconds since the beginning of the session. The individual round trip time is calculated based on the RTCP timestamps in the RTCP Receiver Report (RR) [[!RFC3550]], hence requires a DLSR value other than 0. The average round trip time can be computed from {{totalRoundTripTime}} by dividing it by {{roundTripTimeMeasurements}}.
The fraction packet loss reported for this SSRC. Calculated as defined in [[!RFC3550]] section 6.4.1 and Appendix A.3.
Represents the total number of RTCP RR blocks received for this SSRC that contain a valid round trip time. This counter will not increment if the {{roundTripTime}} can not be calculated because no RTCP Receiver Report with a DLSR value other than 0 has been received.
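The average round trip time computation described above can be sketched as follows, with hypothetical "remote-inbound-rtp" stats values:

```javascript
// Average RTT from cumulative totals, as described in the spec text above.
// The stats values are hypothetical.
function averageRoundTripTime(stats) {
  if (stats.roundTripTimeMeasurements === 0) return undefined;
  return stats.totalRoundTripTime / stats.roundTripTimeMeasurements;
}

const stats = { totalRoundTripTime: 2.5, roundTripTimeMeasurements: 50 };
console.log(averageRoundTripTime(stats)); // 0.05 s
```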
dictionary RTCSentRtpStreamStats : RTCRtpStreamStats { unsigned long long packetsSent; unsigned long long bytesSent; };
Total number of RTP packets sent for this SSRC. This includes retransmissions. Calculated as defined in [[!RFC3550]] section 6.4.1.
Total number of bytes sent for this SSRC. This includes retransmissions. Calculated as defined in [[!RFC3550]] section 6.4.1.
The RTCOutboundRtpStreamStats dictionary represents the measurement metrics for the outgoing RTP stream. The timestamp reported in the statistics object is the time at which the data was sampled.
dictionary RTCOutboundRtpStreamStats : RTCSentRtpStreamStats { DOMString mid; DOMString mediaSourceId; DOMString remoteId; DOMString rid; unsigned long long headerBytesSent; unsigned long long retransmittedPacketsSent; unsigned long long retransmittedBytesSent; unsigned long rtxSsrc; double targetBitrate; unsigned long long totalEncodedBytesTarget; unsigned long frameWidth; unsigned long frameHeight; double framesPerSecond; unsigned long framesSent; unsigned long hugeFramesSent; unsigned long framesEncoded; unsigned long keyFramesEncoded; unsigned long long qpSum; double totalEncodeTime; double totalPacketSendDelay; RTCQualityLimitationReason qualityLimitationReason; record<DOMString, double> qualityLimitationDurations; unsigned long qualityLimitationResolutionChanges; unsigned long nackCount; unsigned long firCount; unsigned long pliCount; DOMString encoderImplementation; boolean powerEfficientEncoder; boolean active; DOMString scalabilityMode; };
If the {{RTCRtpTransceiver}} owning this stream has a {{RTCRtpTransceiver/mid}} value that is not null, this is that value, otherwise this member MUST NOT be [= map/exist | present =].
The identifier of the stats object representing the track currently attached to the sender of this stream, an {{RTCMediaSourceStats}}.
The {{remoteId}} is used for looking up the remote {{RTCRemoteInboundRtpStreamStats}} object for the same SSRC.
MUST NOT [= map/exist =] for audio. Only [= map/exist =]s if a {{RTCRtpCodingParameters/rid}} has been set for this RTP stream. If {{RTCRtpCodingParameters/rid}} is set, this value will be present regardless of whether the RID RTP header extension has been negotiated.
Total number of RTP header and padding bytes sent for this SSRC. This does not include the size of transport layer headers such as IP or UDP. headerBytesSent + bytesSent equals the number of bytes sent as payload over the transport.
The total number of packets that were retransmitted for this SSRC. This is a subset of {{RTCSentRtpStreamStats/packetsSent}}. If RTX is not negotiated, retransmitted packets are sent over this {{RTCRtpStreamStats/ssrc}}. If RTX was negotiated, retransmitted packets are sent over a separate SSRC but are still accounted for here.
The total number of bytes that were retransmitted for this SSRC, only including payload bytes. This is a subset of {{RTCSentRtpStreamStats/bytesSent}}. If RTX is not negotiated, retransmitted bytes are sent over this {{RTCRtpStreamStats/ssrc}}. If RTX was negotiated, retransmitted bytes are sent over a separate SSRC but are still accounted for here.
If RTX is negotiated for retransmissions on a separate RTP stream, this is the SSRC of the RTX stream that is associated with this stream's {{RTCRtpStreamStats/ssrc}}. If RTX is not negotiated, this value MUST NOT be [= map/exist | present =].
Reflects the current encoder target in bits per second. The target is an instantaneous value reflecting the encoder's settings, but the resulting payload bytes sent per second, excluding retransmissions, SHOULD closely correlate to the target. See also {{RTCSentRtpStreamStats/bytesSent}} and {{RTCOutboundRtpStreamStats/retransmittedBytesSent}}. The {{targetBitrate}} is defined in the same way as the Transport Independent Application Specific (TIAS) bitrate [[!RFC3890]].
MUST NOT [= map/exist =] for audio. This value is increased by the target frame size in bytes every time a frame has been encoded. The actual frame size may be bigger or smaller than this number. This value goes up every time {{framesEncoded}} goes up.
MUST NOT [= map/exist =] for audio. Represents the width of the last encoded frame. The resolution of the encoded frame may be lower than the media source (see {{RTCVideoSourceStats.width}}). Before the first frame is encoded this member MUST NOT [= map/exist =].
MUST NOT [= map/exist =] for audio. Represents the height of the last encoded frame. The resolution of the encoded frame may be lower than the media source (see {{RTCVideoSourceStats.height}}). Before the first frame is encoded this member MUST NOT [= map/exist =].
MUST NOT [= map/exist =] for audio. The number of encoded frames during the last second. This may be lower than the media source frame rate (see {{RTCVideoSourceStats.framesPerSecond}}).
MUST NOT [= map/exist =] for audio. Represents the total number of frames sent on this RTP stream.
MUST NOT [= map/exist =] for audio. Represents the total number of huge frames sent by this RTP stream. Huge frames, by definition, are frames that have an encoded size at least 2.5 times the average size of the frames. The average size of the frames is defined as the target bitrate per second divided by the target FPS at the time the frame was encoded. These are usually frames that are complex to encode, with a lot of changes in the picture. This can be used to estimate, e.g., slide changes in the streamed presentation.
The multiplier of 2.5 was chosen by analyzing encoded frame sizes for a sample presentation using a standalone WebRTC implementation. 2.5 is a reasonably large multiplier that still caused all slide change events to be identified as huge frames. It did, however, produce 1.4% false positive slide change detections, which was deemed reasonable.
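The threshold above can be sketched as follows. Note this is an illustrative reading of the definition, assuming the target bitrate is expressed in bits per second and frame sizes in bytes (hence the division by 8); the exact unit handling is an assumption of this sketch:

```javascript
// Huge-frame heuristic: a frame is "huge" if its encoded size is at least
// 2.5x the average frame size, where the average is derived from the
// target bitrate and target FPS. Division by 8 converts bits to bytes
// (an assumption of this sketch). All values are hypothetical.
function isHugeFrame(encodedBytes, targetBitrate, targetFps) {
  const averageFrameBytes = targetBitrate / 8 / targetFps;
  return encodedBytes >= 2.5 * averageFrameBytes;
}

// 1 Mbps at 30 fps → average frame ≈ 4167 bytes, threshold ≈ 10417 bytes.
console.log(isHugeFrame(15000, 1000000, 30)); // true
console.log(isHugeFrame(5000, 1000000, 30));  // false
```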
MUST NOT [= map/exist =] for audio. It represents the total number of frames successfully encoded for this RTP media stream.
MUST NOT [= map/exist =] for audio. It represents the total number of key frames, such as key frames in VP8 [[RFC6386]] or IDR-frames in H.264 [[RFC6184]], successfully encoded for this RTP media stream. This is a subset of {{framesEncoded}}. framesEncoded - keyFramesEncoded gives you the number of delta frames encoded.
MUST NOT [= map/exist =] for audio. The sum of the QP values of frames encoded by this sender. The count of frames is in {{framesEncoded}}.
The definition of QP value depends on the codec; for VP8, the QP value is the value carried in the frame header as the syntax element y_ac_qi, as defined in [[RFC6386]] section 19.2. Its range is 0..127.
Note that the QP value is only an indication of quantizer values used; many formats have ways to vary the quantizer value within the frame.
MUST NOT [= map/exist =] for audio. Total number of seconds that has been spent encoding the {{framesEncoded}} frames of this stream. The average encode time can be calculated by dividing this value with {{framesEncoded}}. The time it takes to encode one frame is the time passed between feeding the encoder a frame and the encoder returning encoded data for that frame. This does not include any additional time it may take to packetize the resulting data.
The total number of seconds that packets have spent buffered locally before being transmitted onto the network. The time is measured from when a packet is emitted from the RTP packetizer until it is handed over to the OS network socket. This measurement is added to {{totalPacketSendDelay}} when {{RTCSentRtpStreamStats/packetsSent}} is incremented.
MUST NOT [= map/exist =] for audio. The current reason for limiting the resolution and/or framerate, or {{RTCQualityLimitationReason/"none"}} if not limited.
The implementation reports the most limiting factor. If the implementation is not able to determine the most limiting factor because multiple may exist, the reasons MUST be reported in the following order of priority: "bandwidth", "cpu", "other".
The consumption of CPU and bandwidth resources is interdependent and difficult to estimate, making it hard to determine what the "most limiting factor" is. The priority order promoted here is based on the heuristic that "bandwidth" is generally more varying and thus a more likely and more useful signal than "cpu".
MUST NOT [= map/exist =] for audio. A record of the total time, in seconds, that this stream has spent in each quality limitation state. The record includes a mapping for all {{RTCQualityLimitationReason}} types, including {{RTCQualityLimitationReason/"none"}}.
The sum of all entries minus {{qualityLimitationDurations}}[{{RTCQualityLimitationReason/"none"}}] gives the total time that the stream has been limited.
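The subtraction described above can be sketched as follows, with a hypothetical qualityLimitationDurations record:

```javascript
// Total time (seconds) the stream spent quality-limited: the sum of all
// entries minus the "none" entry. The record values are hypothetical.
function totalLimitedTime(durations) {
  let sum = 0;
  for (const reason of Object.keys(durations)) sum += durations[reason];
  return sum - durations["none"];
}

const durations = { none: 55.0, cpu: 3.0, bandwidth: 12.0, other: 0.0 };
console.log(totalLimitedTime(durations)); // 15 seconds limited
```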
MUST NOT [= map/exist =] for audio. The number of times that the resolution has changed because we are quality limited ({{qualityLimitationReason}} has a value other than {{RTCQualityLimitationReason/"none"}}). The counter is initially zero and increases when the resolution goes up or down. For example, if a 720p track is sent as 480p for some time and then recovers to 720p, {{qualityLimitationResolutionChanges}} will have the value 2.
Count the total number of Negative ACKnowledgement (NACK) packets, as defined in [[!RFC4585]] section 6.2.1, received by this sender.
MUST NOT [= map/exist =] for audio. Count the total number of Full Intra Request (FIR) packets, as defined in [[!RFC5104]] section 4.3.1, received by this sender. Does not count the RTCP FIR indicated in [[?RFC2032]] which was deprecated by [[?RFC4587]].
MUST NOT [= map/exist =] for audio. Count the total number of Picture Loss Indication (PLI) packets, as defined in [[!RFC4585]] section 6.3.1, received by this sender.
MUST NOT [= map/exist =] unless [= exposing hardware is allowed =].
MUST NOT [= map/exist =] for audio. Identifies the encoder implementation used. This is useful for diagnosing interoperability issues.
MUST NOT [= map/exist =] unless [= exposing hardware is allowed =].
MUST NOT [= map/exist =] for audio. Whether the encoder currently used is considered power efficient by the user agent. This SHOULD reflect if the configuration results in hardware acceleration, but the user agent MAY take other information into account when deciding if the configuration is considered power efficient.
Indicates whether this RTP stream is configured to be sent or disabled. Note that an active stream may still not be sending, e.g., when limited by network conditions.
MUST NOT [= map/exist =] for audio. Only [= map/exist =]s when a scalability mode is currently configured for this RTP stream.
enum RTCQualityLimitationReason { "none", "cpu", "bandwidth", "other", };
Enum value | Description
---|---
none | The resolution and/or framerate is not limited.
cpu | The resolution and/or framerate is primarily limited due to CPU load.
bandwidth | The resolution and/or framerate is primarily limited due to congestion cues during bandwidth estimation. Typically, congestion control algorithms use inter-arrival time, round-trip time, packet loss or other congestion cues to perform bandwidth estimation.
other | The resolution and/or framerate is primarily limited for a reason other than the above.
The RTCRemoteOutboundRtpStreamStats dictionary represents the remote endpoint's measurement metrics for its outgoing RTP stream (corresponding to an incoming RTP stream at the local endpoint). The timestamp reported in the statistics object is the time at which the corresponding RTCP SR was received.
dictionary RTCRemoteOutboundRtpStreamStats : RTCSentRtpStreamStats { DOMString localId; DOMHighResTimeStamp remoteTimestamp; unsigned long long reportsSent; double roundTripTime; double totalRoundTripTime; unsigned long long roundTripTimeMeasurements; };
The {{localId}} is used for looking up the local {{RTCInboundRtpStreamStats}} object for the same SSRC.
{{remoteTimestamp}}, of type {{DOMHighResTimeStamp}} [[!HIGHRES-TIME]], represents the remote timestamp at which these statistics were sent by the remote endpoint. This differs from {{RTCStats/timestamp}}, which represents the time at which the statistics were generated or received by the local endpoint. The {{remoteTimestamp}}, if present, is derived from the NTP timestamp in an RTCP Sender Report (SR) block, which reflects the remote endpoint's clock. That clock may not be synchronized with the local clock.
Represents the total number of RTCP Sender Report (SR) blocks sent for this SSRC.
Estimated round trip time for this SSRC based on the latest RTCP Sender Report (SR) that contains a DLRR report block as defined in [[!RFC3611]]. The calculation of the round trip time is defined in section 4.5. of [[!RFC3611]]. MUST NOT [= map/exist =] if the latest SR does not contain the DLRR report block, or if the last RR timestamp in the DLRR report block is zero, or if the delay since last RR value in the DLRR report block is zero.
Represents the cumulative sum of all round trip time measurements in seconds since the beginning of the session. The individual round trip time is calculated based on the DLRR report block in the RTCP Sender Report (SR) [[!RFC3611]]. This counter will not increment if the {{roundTripTime}} can not be calculated. The average round trip time can be computed from {{totalRoundTripTime}} by dividing it by {{roundTripTimeMeasurements}}.
Represents the total number of RTCP Sender Report (SR) blocks received for this SSRC that contain a DLRR report block that can derive a valid round trip time according to [[!RFC3611]]. This counter will not increment if the {{roundTripTime}} can not be calculated.
The RTCMediaSourceStats dictionary represents a track that is currently attached to one or more senders. It contains information about media sources such as frame rate and resolution prior to encoding. This is the media passed from the {{MediaStreamTrack}} to the {{RTCRtpSender}}s. This is in contrast to {{RTCOutboundRtpStreamStats}} whose members describe metrics as measured after the encoding step. For example, a track may be captured from a high-resolution camera, its frames downscaled due to track constraints and then further downscaled by the encoders due to CPU and network conditions. This dictionary reflects the video frames or audio samples passed out from the track - after track constraints have been applied but before any encoding or further downsampling occurs.
Media source objects are of either subdictionary {{RTCAudioSourceStats}} or {{RTCVideoSourceStats}}. The {{RTCStats/type}} is the same ({{RTCStatsType/"media-source"}}) but {{RTCMediaSourceStats/kind}} is different ("audio" or "video") depending on the kind of track.
The media source stats objects are created when a track is attached to any {{RTCRtpSender}} and may subsequently be attached to multiple senders during its life. The life of this object ends when the track is no longer attached to any sender of the same {{RTCPeerConnection}}. If a track whose media source object ended is attached again this results in a new media source stats object whose counters (such as number of frames) are reset.
dictionary RTCMediaSourceStats : RTCStats { required DOMString trackIdentifier; required DOMString kind; };
The value of the {{MediaStreamTrack}}'s id attribute.
The value of the {{MediaStreamTrack}}'s kind attribute. This is either "audio" or "video". If it is "audio" then this stats object is of type {{RTCAudioSourceStats}}. If it is "video" then this stats object is of type {{RTCVideoSourceStats}}.
The RTCAudioSourceStats dictionary represents an audio track that is attached to one or more senders. It is an {{RTCMediaSourceStats}} whose {{RTCMediaSourceStats/kind}} is "audio".
dictionary RTCAudioSourceStats : RTCMediaSourceStats { double audioLevel; double totalAudioEnergy; double totalSamplesDuration; double echoReturnLoss; double echoReturnLossEnhancement; };
Represents the audio level of the media source. For audio levels of remotely sourced tracks, see {{RTCInboundRtpStreamStats}} instead.
The value is between 0..1 (linear), where 1.0 represents 0 dBov, 0 represents silence, and 0.5 represents approximately 6 dBSPL change in the sound pressure level from 0 dBov.
The {{audioLevel}} is averaged over some small interval, using the algorithm described under {{totalAudioEnergy}}. The interval used is implementation dependent.
Represents the audio energy of the media source. For audio energy of remotely sourced tracks, see {{RTCInboundRtpStreamStats}} instead.
This value MUST be computed as follows: for each audio sample produced by the media source during the lifetime of this stats object, add the sample's value divided by the highest-intensity encodable value, squared and then multiplied by the duration of the sample in seconds. In other words, duration * Math.pow(energy/maxEnergy, 2).
This can be used to obtain a root mean square (RMS) value that uses the same units as {{audioLevel}}, as defined in [[RFC6464]]. It can be converted to these units using the formula Math.sqrt(totalAudioEnergy/totalSamplesDuration). This calculation can also be performed using the differences between the values of two different {{RTCPeerConnection/getStats()}} calls, in order to compute the average audio level over any desired time interval. In other words, do Math.sqrt((energy2 - energy1)/(duration2 - duration1)).
For example, if a 10ms packet of audio is produced with an RMS of 0.5 (out of 1.0), this should add 0.5 * 0.5 * 0.01 = 0.0025 to {{totalAudioEnergy}}. If another 10ms packet with an RMS of 0.1 is produced, this should similarly add 0.0001 to {{totalAudioEnergy}}. Then, Math.sqrt(totalAudioEnergy/totalSamplesDuration) becomes Math.sqrt(0.0026/0.02) = 0.36, which is the same value that would be obtained by doing an RMS calculation over the contiguous 20ms segment of audio.
If multiple audio channels are used, the audio energy of a sample refers to the highest energy of any channel.
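The two-snapshot computation described above can be sketched as a small helper. The function name is illustrative and not part of any API; the arguments are {{RTCAudioSourceStats}}-shaped objects taken from two {{RTCPeerConnection/getStats()}} calls:

```javascript
// Average audio level (RMS, same units as audioLevel) of a media
// source over the interval between two stats snapshots.
// `older` and `newer` mirror RTCAudioSourceStats; the names are
// illustrative, not part of the specification.
function averageAudioLevel(older, newer) {
  const energy = newer.totalAudioEnergy - older.totalAudioEnergy;
  const duration = newer.totalSamplesDuration - older.totalSamplesDuration;
  if (duration <= 0) return NaN; // no samples produced in the interval
  return Math.sqrt(energy / duration);
}
```

Applied to the 20ms example above (energy delta 0.0026, duration delta 0.02), this yields the same 0.36 average level.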
Represents the audio duration of the media source. For audio durations of remotely sourced tracks, see {{RTCInboundRtpStreamStats}} instead.
Represents the total duration in seconds of all samples that have been produced by this source for the lifetime of this stats object. Can be used with {{totalAudioEnergy}} to compute an average audio level over different intervals.
Only [= map/exist =]s when the {{MediaStreamTrack}} is sourced from a microphone where echo cancellation is applied. Calculated in decibels, as defined in [[!ECHO]] (2012) section 3.14.
If multiple audio channels are used, the channel with the least audio energy is considered for any sample.
Only [= map/exist =]s when the {{MediaStreamTrack}} is sourced from a microphone where echo cancellation is applied. Calculated in decibels, as defined in [[!ECHO]] (2012) section 3.15.
If multiple audio channels are used, the channel with the least audio energy is considered for any sample.
The RTCVideoSourceStats dictionary represents a video track that is attached to one or more senders. It is an {{RTCMediaSourceStats}} whose {{RTCMediaSourceStats/kind}} is "video".
dictionary RTCVideoSourceStats : RTCMediaSourceStats {
  unsigned long width;
  unsigned long height;
  unsigned long frames;
  double framesPerSecond;
};
The width, in pixels, of the last frame originating from this source. Before a frame has been produced this member MUST NOT [= map/exist =].
The height, in pixels, of the last frame originating from this source. Before a frame has been produced this member MUST NOT [= map/exist =].
The total number of frames originating from this source.
The number of frames originating from this source, measured during the last second. For the first second of this object's lifetime this member MUST NOT [= map/exist =].
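Since {{framesPerSecond}} only covers the last second, an application can compute the average frame rate over an arbitrary interval from the {{frames}} and timestamp deltas of two stats snapshots. The sketch below is illustrative (RTCStats timestamps are in milliseconds):

```javascript
// Average frame rate of a video source over the interval between two
// stats snapshots. `older` and `newer` mirror RTCVideoSourceStats
// (including the inherited RTCStats timestamp, in milliseconds);
// the helper name is illustrative.
function averageFps(older, newer) {
  const frames = newer.frames - older.frames;
  const seconds = (newer.timestamp - older.timestamp) / 1000;
  if (seconds <= 0) return NaN;
  return frames / seconds;
}
```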
Only applicable if the playout path represents an audio device. Represents one playout path: if the same playout stats object is referenced by multiple {{RTCInboundRtpStreamStats}}, this is an indication that audio mixing is happening, in which case the sample counters in this stats object refer to the samples after mixing.
dictionary RTCAudioPlayoutStats : RTCStats {
  required DOMString kind;
  double synthesizedSamplesDuration;
  unsigned long synthesizedSamplesEvents;
  double totalSamplesDuration;
  double totalPlayoutDelay;
  unsigned long long totalSamplesCount;
};
The {{RTCAudioPlayoutStats}} dictionary and all of its metrics are features at risk due to lack of consensus.
For audio playout, this has the value "audio". This reflects the kind attribute of the {{MediaStreamTrack}}(s) being played out.
If the playout path is unable to produce audio samples on time for device playout, samples are synthesized to be played out instead. {{synthesizedSamplesDuration}} is measured in seconds and is incremented each time an audio sample is synthesized by this playout path. This metric can be used together with {{totalSamplesDuration}} to calculate the percentage of played out media that was synthesized.
Synthesization typically only happens if the pipeline is underperforming. Samples synthesized by the {{RTCInboundRtpStreamStats}} are not counted here, but in {{RTCInboundRtpStreamStats}}.{{RTCInboundRtpStreamStats/concealedSamples}}.
The number of synthesized samples events. This counter increases every time a sample is synthesized after a non-synthesized sample. That is, multiple consecutive synthesized samples will increase {{synthesizedSamplesDuration}} multiple times but count as a single synthesized samples event.
The total duration, in seconds, of all audio samples that have been played out. Includes both synthesized and non-synthesized samples.
When audio samples are pulled by the playout device, this counter is incremented with the estimated delay of the playout path for that audio sample. The playout delay includes the delay from being emitted to the actual time of playout on the device. This metric can be used together with {{totalSamplesCount}} to calculate the average playout delay per sample.
When audio samples are pulled by the playout device, this counter is incremented with the number of samples emitted for playout.
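The derived playout metrics suggested above (share of played-out media that was synthesized, and average playout delay per sample) can be sketched as follows. The helper names are illustrative; the argument mirrors {{RTCAudioPlayoutStats}}:

```javascript
// Fraction of played-out audio that was synthesized, per the
// synthesizedSamplesDuration / totalSamplesDuration relationship.
function synthesizedFraction(playout) {
  if (playout.totalSamplesDuration <= 0) return 0;
  return playout.synthesizedSamplesDuration / playout.totalSamplesDuration;
}

// Average playout delay per sample, in seconds, per the
// totalPlayoutDelay / totalSamplesCount relationship.
function averagePlayoutDelay(playout) {
  if (playout.totalSamplesCount <= 0) return 0;
  return playout.totalPlayoutDelay / playout.totalSamplesCount;
}
```

As with the audio-level helpers, subtracting two snapshots before dividing gives the same metrics over a chosen interval rather than the object's lifetime.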
dictionary RTCPeerConnectionStats : RTCStats { unsigned long dataChannelsOpened; unsigned long dataChannelsClosed; };
Represents the number of unique {{RTCDataChannel}}s that have entered the {{RTCDataChannelState/"open"}} state during their lifetime.
Represents the number of unique {{RTCDataChannel}}s that have left the {{RTCDataChannelState/"open"}} state during their lifetime (due to being closed by either end or the underlying transport being closed). {{RTCDataChannel}}s that transition from {{RTCDataChannelState/"connecting"}} to {{RTCDataChannelState/"closing"}} or {{RTCDataChannelState/"closed"}} without ever being {{RTCDataChannelState/"open"}} are not counted in this number.
The number of currently open data channels can be calculated as {{RTCPeerConnectionStats/dataChannelsOpened}} - {{RTCPeerConnectionStats/dataChannelsClosed}}. This difference is never negative.
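The difference described above can be expressed as a trivial helper; the name is illustrative and the argument mirrors {{RTCPeerConnectionStats}}:

```javascript
// Number of currently open data channels, derived from an
// RTCPeerConnectionStats-shaped object (illustrative helper).
function openDataChannels(pcStats) {
  return pcStats.dataChannelsOpened - pcStats.dataChannelsClosed;
}
```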
dictionary RTCDataChannelStats : RTCStats {
  DOMString label;
  DOMString protocol;
  unsigned short dataChannelIdentifier;
  required RTCDataChannelState state;
  unsigned long messagesSent;
  unsigned long long bytesSent;
  unsigned long messagesReceived;
  unsigned long long bytesReceived;
};
The {{RTCDataChannel/id}} attribute of the {{RTCDataChannel}} object.
Represents the total number of API "message" events sent.
Represents the total number of payload bytes sent on this {{RTCDataChannel}}, i.e., not including headers or padding.
Represents the total number of API "message" events received.
Represents the total number of bytes received on this {{RTCDataChannel}}, i.e., not including headers or padding.
An {{RTCTransportStats}} object represents the stats corresponding to an {{RTCDtlsTransport}} and its underlying {{RTCIceTransport}}. When bundling is used, a single transport will be used for all {{MediaStreamTrack}}s in the bundle group. If bundling is not used, different {{MediaStreamTrack}}s will use different transports. Bundling is described in [[!WEBRTC]].
dictionary RTCTransportStats : RTCStats {
  unsigned long long packetsSent;
  unsigned long long packetsReceived;
  unsigned long long bytesSent;
  unsigned long long bytesReceived;
  RTCIceRole iceRole;
  DOMString iceLocalUsernameFragment;
  required RTCDtlsTransportState dtlsState;
  RTCIceTransportState iceState;
  DOMString selectedCandidatePairId;
  DOMString localCertificateId;
  DOMString remoteCertificateId;
  DOMString tlsVersion;
  DOMString dtlsCipher;
  RTCDtlsRole dtlsRole;
  DOMString srtpCipher;
  unsigned long selectedCandidatePairChanges;
};
Represents the total number of packets sent over this transport.
Represents the total number of packets received on this transport.
Represents the total number of payload bytes sent on this {{RTCIceTransport}}, i.e., not including headers, padding or ICE connectivity checks.
Represents the total number of payload bytes received on this {{RTCIceTransport}}, i.e., not including headers, padding or ICE connectivity checks.
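An application monitoring throughput can turn the cumulative byte counters into approximate bitrates by differencing two snapshots. The sketch below is illustrative; the arguments mirror {{RTCTransportStats}} (including the inherited RTCStats timestamp, in milliseconds):

```javascript
// Approximate send/receive payload bitrates (bits per second) of a
// transport, from two snapshots of an RTCTransportStats-shaped object.
// Helper and argument names are illustrative.
function transportBitrates(older, newer) {
  const seconds = (newer.timestamp - older.timestamp) / 1000;
  if (seconds <= 0) return { send: NaN, receive: NaN };
  return {
    send: (8 * (newer.bytesSent - older.bytesSent)) / seconds,
    receive: (8 * (newer.bytesReceived - older.bytesReceived)) / seconds,
  };
}
```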
Set to the current value of the {{RTCIceTransport/role}} attribute of the underlying {{RTCDtlsTransport}}.{{RTCDtlsTransport/iceTransport}}.
Set to the current value of the local username fragment used in message validation procedures [[RFC5245]] for this {{RTCIceTransport}}. It may be updated on setLocalDescription() and on ICE restart.
Set to the current value of the {{RTCDtlsTransport/state}} attribute of the underlying {{RTCDtlsTransport}}.
Set to the current value of the {{RTCIceTransport/state}} attribute of the underlying {{RTCIceTransport}}.
It is a unique identifier that is associated with the object that was inspected to produce the {{RTCIceCandidatePairStats}} associated with this transport.
For components where DTLS is negotiated, this is the identifier of the stats object for the local certificate.
For components where DTLS is negotiated, this is the identifier of the stats object for the remote certificate.
For components where DTLS is negotiated, the TLS version agreed. Only [= map/exist =]s after DTLS negotiation is complete.
The value comes from ServerHello.supported_versions if present, otherwise from ServerHello.version. It is represented as four upper case hexadecimal digits, representing the two bytes of the version.
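An application could split the four hexadecimal digits back into the two version bytes; the helper below is an illustrative sketch (for example, DTLS 1.2 is represented on the wire as the bytes 254 and 253, i.e. "FEFD"):

```javascript
// Split a four-hex-digit tlsVersion string into its two version bytes.
// Illustrative helper, not part of the specification.
function parseTlsVersion(tlsVersion) {
  return {
    major: parseInt(tlsVersion.slice(0, 2), 16),
    minor: parseInt(tlsVersion.slice(2, 4), 16),
  };
}
```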
Descriptive name of the cipher suite used for the DTLS transport, as defined in the "Description" column of the IANA cipher suite registry [[!IANA-TLS-CIPHERS]].
{{RTCDtlsRole/"client"}} or {{RTCDtlsRole/"server"}} depending on the DTLS role. {{RTCDtlsRole/"unknown"}} before the DTLS negotiation starts.
Descriptive name of the protection profile used for the SRTP transport, as defined in the "Profile" column of the IANA DTLS-SRTP protection profile registry [[!IANA-DTLS-SRTP]] and described further in [[RFC5764]].
The number of times that the selected candidate pair of this transport has changed. Going from not having a selected candidate pair to having a selected candidate pair, or the other way around, also increases this counter. It is initially zero and becomes one when an initial candidate pair is selected.
enum RTCDtlsRole { "client", "server", "unknown", };
Enum value | Description |
---|---|
client | The RTCPeerConnection is acting as a DTLS client as defined in [[RFC6347]]. |
server | The RTCPeerConnection is acting as a DTLS server as defined in [[RFC6347]]. |
unknown | The DTLS role of the RTCPeerConnection has not been determined yet. |
{{RTCIceCandidateStats}} reflects the properties of a candidate as defined in Section 15.1 of [[!RFC5245]]. It corresponds to an {{RTCIceCandidate}} object.
dictionary RTCIceCandidateStats : RTCStats {
  required DOMString transportId;
  DOMString? address;
  long port;
  DOMString protocol;
  required RTCIceCandidateType candidateType;
  long priority;
  DOMString url;
  RTCIceServerTransportProtocol relayProtocol;
  DOMString foundation;
  DOMString relatedAddress;
  long relatedPort;
  DOMString usernameFragment;
  RTCIceTcpCandidateType tcpType;
};
It is a unique identifier that is associated with the object that was inspected to produce the {{RTCTransportStats}} associated with this candidate.
It is the address of the candidate, allowing for IPv4 addresses, IPv6 addresses, and fully qualified domain names (FQDNs). See [[!RFC5245]] section 15.1 for details.
The user agent should make sure that only remote candidate addresses that the web application has configured on the corresponding {{RTCPeerConnection}} are exposed; this is especially important for peer reflexive remote candidates. By default, the user agent MUST leave the {{RTCIceCandidateStats/address}} member as null in the {{RTCIceCandidateStats}} dictionary of any remote candidate. Once a {{RTCPeerConnection}} instance learns an address from the web application via {{RTCPeerConnection/addIceCandidate()}}, the user agent can expose the {{RTCIceCandidateStats/address}} member value in any remote {{RTCIceCandidateStats}} dictionary of the corresponding {{RTCPeerConnection}} that matches the newly learned address.
It is the port number of the candidate.
Valid values are "udp" and "tcp", based on the "transport" field defined in [[!RFC5245]] section 15.1.
This enumeration is defined in [[WEBRTC]].
Calculated as defined in [[!RFC5245]] section 15.1.
For local candidates of type {{RTCIceCandidateType/"srflx"}} or type {{RTCIceCandidateType/"relay"}} this is the URL of the ICE server from which the candidate was obtained and defined in [[WEBRTC]].
For remote candidates, this property MUST NOT be [= map/exist | present =].
It is the protocol used by the endpoint to communicate with the TURN server. This is only present for local relay candidates and defined in [[WEBRTC]].
For remote candidates, this property MUST NOT be [= map/exist | present =].
The ICE foundation as defined in [[!RFC5245]] section 15.1.
The ICE rel-addr defined in [[!RFC5245]] section 15.1. Only set for server-reflexive, peer-reflexive and relay candidates.
The ICE rel-port defined in [[!RFC5245]] section 15.1. Only set for server-reflexive, peer-reflexive and relay candidates.
The ICE username fragment as defined in [[!RFC5245]] section 7.1.2.3.
The ICE candidate TCP type, as defined in {{RTCIceTcpCandidateType}} and used in {{RTCIceCandidate}}.
dictionary RTCIceCandidatePairStats : RTCStats {
  required DOMString transportId;
  required DOMString localCandidateId;
  required DOMString remoteCandidateId;
  required RTCStatsIceCandidatePairState state;
  boolean nominated;
  unsigned long long packetsSent;
  unsigned long long packetsReceived;
  unsigned long long bytesSent;
  unsigned long long bytesReceived;
  DOMHighResTimeStamp lastPacketSentTimestamp;
  DOMHighResTimeStamp lastPacketReceivedTimestamp;
  double totalRoundTripTime;
  double currentRoundTripTime;
  double availableOutgoingBitrate;
  double availableIncomingBitrate;
  unsigned long long requestsReceived;
  unsigned long long requestsSent;
  unsigned long long responsesReceived;
  unsigned long long responsesSent;
  unsigned long long consentRequestsSent;
  unsigned long packetsDiscardedOnSend;
  unsigned long long bytesDiscardedOnSend;
};
It is a unique identifier that is associated with the object that was inspected to produce the {{RTCTransportStats}} associated with this candidate pair.
It is a unique identifier that is associated with the object that was inspected to produce the {{RTCIceCandidateStats}} for the local candidate associated with this candidate pair.
It is a unique identifier that is associated with the object that was inspected to produce the {{RTCIceCandidateStats}} for the remote candidate associated with this candidate pair.
Represents the state of the checklist for the local and remote candidates in a pair.
Related to updating the nominated flag described in Section 7.1.3.2.4 of [[!RFC5245]].
Represents the total number of packets sent on this candidate pair.
Represents the total number of packets received on this candidate pair.
Represents the total number of payload bytes sent on this candidate pair, i.e., not including headers, padding or ICE connectivity checks.
Represents the total number of payload bytes received on this candidate pair, i.e., not including headers, padding or ICE connectivity checks.
Represents the timestamp at which the last packet was sent on this particular candidate pair, excluding STUN packets.
Represents the timestamp at which the last packet was received on this particular candidate pair, excluding STUN packets.
Represents the sum of all round trip time measurements in seconds since the beginning of the session, based on STUN connectivity check [[!STUN-PATH-CHAR]] responses (responsesReceived), including those that reply to requests that are sent in order to verify consent [[!RFC7675]]. The average round trip time can be computed from {{totalRoundTripTime}} by dividing it by {{responsesReceived}}.
Represents the latest round trip time measured in seconds, computed from STUN connectivity checks [[!STUN-PATH-CHAR]], including those that are sent for consent verification [[!RFC7675]].
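The average round trip time described under {{totalRoundTripTime}} can be sketched as a small helper; the name is illustrative and the argument mirrors {{RTCIceCandidatePairStats}}:

```javascript
// Average STUN round trip time in seconds for a candidate pair,
// computed as totalRoundTripTime / responsesReceived.
// Illustrative helper, not part of the specification.
function averageRoundTripTime(pair) {
  if (!pair.responsesReceived) return NaN;
  return pair.totalRoundTripTime / pair.responsesReceived;
}
```

Differencing the two fields between snapshots, as in the audio-level example, gives the average round trip time over a chosen interval instead.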
It is calculated by the underlying congestion control by combining the available bitrate for all the outgoing RTP streams using this candidate pair. The bitrate measurement does not count the size of the IP or other transport layers like TCP or UDP. It is similar to the TIAS defined in [[!RFC3890]], i.e., it is measured in bits per second and the bitrate is calculated over a 1 second window. For candidate pairs in use, the estimate is normally no lower than the bitrate for the packets sent at {{lastPacketSentTimestamp}}, but might be higher.
Only [= map/exist =]s when the underlying congestion control calculated either a send-side bandwidth estimation, for example using mechanisms such as TWCC, or received a receive-side estimation via RTCP, for example the one described in REMB. MUST NOT [= map/exist =] for candidate pairs that were never used for sending packets that were taken into account for bandwidth estimation or candidate pairs that have been used previously but are not currently in use.
It is calculated by the underlying congestion control by combining the available bitrate for all the incoming RTP streams using this candidate pair. The bitrate measurement does not count the size of the IP or other transport layers like TCP or UDP. It is similar to the TIAS defined in [[!RFC3890]], i.e., it is measured in bits per second and the bitrate is calculated over a 1 second window. For pairs in use, the estimate is normally no lower than the bitrate for the packets received at {{lastPacketReceivedTimestamp}}, but might be higher.
Only [= map/exist =]s when a receive-side bandwidth estimation, for example REMB, was calculated. MUST NOT [= map/exist =] for candidate pairs that were never used for receiving packets that were taken into account for bandwidth estimation or candidate pairs that have been used previously but are not currently in use.
Represents the total number of connectivity check requests received (including retransmissions). It is impossible for the receiver to tell whether the request was sent in order to check connectivity or check consent, so all connectivity check requests are counted here.
Represents the total number of connectivity check requests sent (not including retransmissions).
Represents the total number of connectivity check responses received.
Represents the total number of connectivity check responses sent. Since we cannot distinguish connectivity check requests and consent requests, all responses are counted.
Represents the total number of consent requests sent.
Total number of packets for this candidate pair that have been discarded due to socket errors, i.e. a socket error occurred when handing the packets to the socket. This might happen due to various reasons, including full buffer or no available memory.
Total number of bytes for this candidate pair that have been discarded due to socket errors, i.e. a socket error occurred when handing the packets containing the bytes to the socket. This might happen due to various reasons, including full buffer or no available memory. Calculated as defined in [[!RFC3550]] section 6.4.1.
enum RTCStatsIceCandidatePairState { "frozen", "waiting", "in-progress", "failed", "succeeded" };
Enum value | Description |
---|---|
frozen | Defined in Section 5.7.4 of [[!RFC5245]]. |
waiting | Defined in Section 5.7.4 of [[!RFC5245]]. |
in-progress | Defined in Section 5.7.4 of [[!RFC5245]]. |
failed | Defined in Section 5.7.4 of [[!RFC5245]]. |
succeeded | Defined in Section 5.7.4 of [[!RFC5245]]. |
dictionary RTCCertificateStats : RTCStats {
  required DOMString fingerprint;
  required DOMString fingerprintAlgorithm;
  required DOMString base64Certificate;
  DOMString issuerCertificateId;
};
The fingerprint of the certificate. Only use the fingerprint value as defined in Section 5 of [[!RFC4572]].
The hash function used to compute the certificate fingerprint. For instance, "sha-256".
The base-64 representation of the DER-encoded certificate.
The {{issuerCertificateId}} refers to the stats object that contains the next certificate in the certificate chain. If the current certificate is at the end of the chain (i.e. a self-signed certificate), this will not be set.
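An application can recover the full chain by following {{issuerCertificateId}} links through the stats report. The sketch below is illustrative; `report` is any Map-like object (such as an RTCStatsReport) supporting get(id):

```javascript
// Walk a certificate chain, starting from one RTCCertificateStats-shaped
// entry and following issuerCertificateId links until the chain ends.
// Illustrative helper, not part of the specification.
function certificateChain(report, certStats) {
  const chain = [];
  let cur = certStats;
  while (cur) {
    chain.push(cur.base64Certificate);
    cur = cur.issuerCertificateId !== undefined
        ? report.get(cur.issuerCertificateId) // next certificate in chain
        : undefined;                          // end of chain (self-signed)
  }
  return chain;
}
```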
Consider the case where the user is experiencing bad sound and the application wants to determine if the cause of it is packet loss. The following example code might be used:
var baselineReport, currentReport;
var sender = pc.getSenders()[0];

sender.getStats().then(function (report) {
  baselineReport = report;
})
.then(function () {
  return new Promise(function (resolve) {
    setTimeout(resolve, aBit); // ... wait a bit
  });
})
.then(function () {
  return sender.getStats();
})
.then(function (report) {
  currentReport = report;
  processStats();
})
.catch(function (error) {
  console.log(error.toString());
});

function processStats() {
  // compare the elements from the current report with the baseline
  for (let now of currentReport.values()) {
    if (now.type != "outbound-rtp") continue;
    // get the corresponding stats from the baseline report
    let base = baselineReport.get(now.id);
    if (base) {
      let remoteNow = currentReport.get(now.remoteId);
      let remoteBase = baselineReport.get(base.remoteId);
      var packetsSent = now.packetsSent - base.packetsSent;
      var packetsReceived = remoteNow.packetsReceived - remoteBase.packetsReceived;
      // if intervalFractionLoss is > 0.3, we've probably found the culprit
      var intervalFractionLoss = (packetsSent - packetsReceived) / packetsSent;
    }
  }
}
The data exposed by WebRTC Statistics includes most of the media and network data also exposed by [[!GETUSERMEDIA]] and [[!WEBRTC]]; as such, all the privacy and security considerations of those specifications related to data exposure apply as well to this specification.
In addition, the properties exposed by {{RTCReceivedRtpStreamStats}}, {{RTCRemoteInboundRtpStreamStats}}, {{RTCSentRtpStreamStats}}, {{RTCOutboundRtpStreamStats}}, {{RTCRemoteOutboundRtpStreamStats}}, {{RTCIceCandidatePairStats}}, {{RTCTransportStats}} expose network-layer data not currently available to the JavaScript layer.
Beyond the risks associated with revealing IP addresses as discussed in the WebRTC 1.0 specification, some combination of the network properties uniquely exposed by this specification can be correlated with location.
For instance, the round-trip time exposed in {{RTCRemoteInboundRtpStreamStats}} can give some coarse indication of how far apart the peers are located, and thus, if one peer's location is known, this may reveal information about the other peer.
When applied to isolated streams, media metrics may allow an application to infer some characteristics of the isolated stream, such as if anyone is speaking (by watching the {{RTCAudioSourceStats/audioLevel}} statistic).
The following stats are deemed to be sensitive, and MUST NOT be reported for an isolated media stream:
{{RTCStatsType}} | Dictionary | Fields |
---|---|---|
{{RTCStatsType/"codec"}} | {{RTCCodecStats}} | |
{{RTCStatsType/"inbound-rtp"}} | {{RTCInboundRtpStreamStats}} | |
{{RTCStatsType/"outbound-rtp"}} | {{RTCOutboundRtpStreamStats}} | |
{{RTCStatsType/"remote-inbound-rtp"}} | {{RTCRemoteInboundRtpStreamStats}} | |
{{RTCStatsType/"remote-outbound-rtp"}} | {{RTCRemoteOutboundRtpStreamStats}} | |
{{RTCStatsType/"media-source"}} | {{RTCAudioSourceStats}} {{RTCVideoSourceStats}} | |
{{RTCStatsType/"media-playout"}} | {{RTCAudioPlayoutStats}} | |
{{RTCStatsType/"peer-connection"}} | {{RTCPeerConnectionStats}} | |
{{RTCStatsType/"data-channel"}} | {{RTCDataChannelStats}} | |
{{RTCStatsType/"transport"}} | {{RTCTransportStats}} | |
{{RTCStatsType/"candidate-pair"}} | {{RTCIceCandidatePairStats}} | |
{{RTCStatsType/"local-candidate"}} | {{RTCIceCandidateStats}} | |
{{RTCStatsType/"remote-candidate"}} | {{RTCIceCandidateStats}} | |
{{RTCStatsType/"certificate"}} | {{RTCCertificateStats}} |
The editors wish to thank the Working Group chairs, Stefan Håkansson, and the Team Contact, Dominique Hazaël-Massieux, for their support. The editors would like to thank Bernard Aboba, Taylor Brandstetter, Henrik Boström, Jan-Ivar Bruaroey, Karthik Budigere, Cullen Jennings, and Lennart Schulte for their contributions to this specification.