WebRTC Working Principles
Basic Architecture
- Application Layer: The layer where developers interact directly, using JavaScript APIs such as getUserMedia for accessing media devices, RTCPeerConnection for establishing P2P connections, and RTCDataChannel for transmitting arbitrary data.
- Browser API Layer: JavaScript APIs provided by browser vendors, encapsulating the underlying C++ implementation, enabling developers to achieve complex real-time communication functionality through simple JavaScript calls.
- Core Functionality Layer: Includes critical components like media processing, network transmission, and security encryption, such as audio and video encoding/decoding, ICE (Interactive Connectivity Establishment) protocol for NAT traversal, and DTLS/SRTP for secure transmission.
- Network Transport Layer: WebRTC primarily uses UDP as the transport protocol, with RTP (Real-time Transport Protocol) and RTCP (Real-time Transport Control Protocol) for transmitting media data and control information.
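To make the RTP framing concrete, here is a minimal sketch (plain JavaScript, not a WebRTC API — browsers handle this internally) that parses the fixed 12-byte RTP header defined in RFC 3550; the sample packet bytes are fabricated for illustration:

```javascript
// Parse the fixed 12-byte RTP header (RFC 3550).
// Input: Uint8Array containing at least one RTP packet.
function parseRtpHeader(bytes) {
  if (bytes.length < 12) throw new Error('Packet too short for RTP header');
  return {
    version: bytes[0] >> 6,                     // Always 2 for RTP
    padding: !!(bytes[0] & 0x20),
    extension: !!(bytes[0] & 0x10),
    csrcCount: bytes[0] & 0x0f,
    marker: !!(bytes[1] & 0x80),
    payloadType: bytes[1] & 0x7f,               // Identifies the codec
    sequenceNumber: (bytes[2] << 8) | bytes[3], // For loss/reorder detection
    timestamp: ((bytes[4] << 24) | (bytes[5] << 16) | (bytes[6] << 8) | bytes[7]) >>> 0,
    ssrc: ((bytes[8] << 24) | (bytes[9] << 16) | (bytes[10] << 8) | bytes[11]) >>> 0
  };
}

// Example: a hand-crafted header (version 2, payload type 96, seq 1)
const header = parseRtpHeader(new Uint8Array([
  0x80, 0x60, 0x00, 0x01,  // V=2, PT=96, seq=1
  0x00, 0x00, 0x00, 0x64,  // timestamp=100
  0x00, 0x00, 0x00, 0x2a   // SSRC=42
]));
console.log(header.payloadType, header.sequenceNumber); // 96 1
```

The sequence number and timestamp fields are what RTCP later uses for loss detection and synchronization.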
Working Principles
Media Capture
WebRTC applications first need to access audio and video input devices such as cameras and microphones. This is done via navigator.mediaDevices.getUserMedia(), which, after obtaining user permission, resolves with a MediaStream object containing the device's audio and/or video tracks.
Code Example:
navigator.mediaDevices.getUserMedia({ video: true, audio: true })
.then(stream => {
// Use the stream for subsequent operations, such as previewing or sending to RTCPeerConnection
})
.catch(error => {
console.error('Error accessing media devices.', error);
});
Signaling
WebRTC does not provide a signaling service; developers must implement one themselves. Signaling handles the exchange of session information between peers before any connection exists — SDP (Session Description Protocol) descriptions and ICE candidate information — and can be carried over WebSocket, HTTP polling (XHR), or any other channel both sides can reach. The signaling process typically includes:
- Discovery and exchange of network information (ICE candidates).
- Exchange of session descriptions (SDP) containing media types, codec parameters, etc.
Practical Example: Assume WebSocket is used as the signaling channel, where two users exchange necessary information through a server to establish a connection.
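Since WebRTC does not standardize the signaling format, applications define their own. Below is a minimal sketch of one such convention — the envelope shape (`type`/`payload` fields) is an assumption of this example, not part of any spec:

```javascript
// A minimal signaling message convention. WebRTC itself does not
// standardize this; the envelope shape here is our own choice.
function makeSignal(type, payload) {
  return JSON.stringify({ type, payload, ts: Date.now() });
}

function parseSignal(raw) {
  const msg = JSON.parse(raw);
  if (!['offer', 'answer', 'ice-candidate'].includes(msg.type)) {
    throw new Error('Unknown signaling message type: ' + msg.type);
  }
  return msg;
}

// In a browser these would travel over a WebSocket, e.g.:
//   ws.send(makeSignal('offer', pc.localDescription));
//   ws.onmessage = e => { const { type, payload } = parseSignal(e.data); ... };
const roundTrip = parseSignal(makeSignal('offer', { sdp: 'v=0...' }));
console.log(roundTrip.type); // offer
```

Keeping the message types to a small closed set ('offer', 'answer', 'ice-candidate') makes the server-side relay logic trivial: it only forwards opaque payloads between peers.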
Connection Establishment
RTCPeerConnection is the core of WebRTC, responsible for establishing and maintaining P2P connections. During connection establishment, peers first exchange SDP offer and answer via signaling to describe local and remote media configurations. Then, both parties use the ICE framework to discover and confirm the optimal network path, which may be direct P2P or relayed through a TURN server.
WebRTC connection establishment follows a strict step-by-step process:
- Media Negotiation Phase
- Obtain the local media stream via the getUserMedia API.
- Create an RTCPeerConnection object.
- Generate a local SDP offer (including media types, codec preferences, etc.).
- Network Negotiation Phase
- Collect ICE candidate addresses (local, STUN-reflected, TURN-relayed).
- Exchange SDP and ICE candidates via the signaling channel.
- Perform ICE connectivity checks.
- Security Establishment Phase
- Complete DTLS handshake (identity verification and key exchange).
- Establish SRTP encrypted channel.
- Begin media transmission.
Key Features: The entire process uses asynchronous callbacks and requires handling various network exceptions.
Media Transmission and Processing
Once the connection is established, media data is encapsulated and transmitted using RTP and RTCP protocols. To adapt to network conditions, WebRTC supports dynamic adjustments to encoding bitrate and resolution, ensuring a smooth communication experience. DTLS and SRTP are used to ensure data security and privacy.
RTCPeerConnection is the core component, managing direct connections between browsers, handling real-time audio and video transmission, and monitoring/adapting to network conditions. After creating an RTCPeerConnection instance, the local media stream must be added, and remote streams are handled accordingly.
Code Example:
const pc = new RTCPeerConnection();
// addStream() is deprecated; add each track individually instead
localStream.getTracks().forEach(track => pc.addTrack(track, localStream));
Media Encoding/Decoding
WebRTC uses RTP (Real-time Transport Protocol) to transmit audio and video data over P2P connections. To accommodate varying network environments and device capabilities, WebRTC supports multiple audio and video codecs, such as VP8, VP9, H.264 (video) and Opus, G.711 (audio). Codec selection and configuration are critical for ensuring communication quality and efficiency.
Code Example (WebRTC does not expose encoder settings directly; codec choice can be influenced per-transceiver via setCodecPreferences where supported, or indirectly through SDP offer/answer negotiation):
const pc = new RTCPeerConnection();
const transceiver = pc.addTransceiver('video');
if ('setCodecPreferences' in transceiver) {
// Reorder the browser's supported codecs so preferred ones come first
const { codecs } = RTCRtpSender.getCapabilities('video');
transceiver.setCodecPreferences(codecs);
}
Quality Monitoring and Feedback
The RTCP protocol provides quality feedback for media streams, such as packet loss rate and latency. WebRTC uses this information for bandwidth estimation and congestion control, dynamically adjusting transmission strategies.
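One concrete piece of that feedback is the "fraction lost" field in RTCP Receiver Reports, an 8-bit fixed-point ratio defined in RFC 3550. A small sketch of how it is derived from packet counters:

```javascript
// RTCP Receiver Report "fraction lost" (RFC 3550): an 8-bit fixed-point
// value, floor(256 * lost / expected), computed over the reporting interval.
function fractionLost(expectedInterval, receivedInterval) {
  const lost = expectedInterval - receivedInterval;
  if (expectedInterval === 0 || lost <= 0) return 0; // no loss (or duplicates)
  return Math.floor((lost / expectedInterval) * 256);
}

// 5 packets lost out of 100 expected in this interval:
console.log(fractionLost(100, 95));  // 12  (12/256 ≈ 4.7%)
console.log(fractionLost(100, 100)); // 0
```

The sender reads this value from incoming Receiver Reports and feeds it into its bandwidth estimation and congestion control, described below.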
Data Channel
In addition to audio and video communication, WebRTC supports transmitting arbitrary data via RTCDataChannel, suitable for scenarios like text chat and file transfer, further expanding the scope of real-time communication applications.
NAT Traversal Core Technologies
WebRTC’s NAT traversal solutions include:
- STUN Protocol
- Obtains the device’s public mapped address.
- Detects NAT types (e.g., full cone, symmetric).
- Features: Lightweight and fast response.
- TURN Protocol
- Relays data as a server.
- Supports TCP/TLS/UDP transmission modes.
- Use Case: Strict symmetric NAT environments.
- ICE Framework
- Combines multiple candidate addresses for connectivity testing.
- Selects the optimal path.
- Includes: Host candidates, reflexive candidates, relay candidates.
Special Handling: For symmetric NATs, hole-punching techniques are used to establish direct connections via pre-exchanged permission information.
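The preference among host, reflexive, and relay candidates is not ad hoc: ICE assigns each candidate a numeric priority using the formula in RFC 8445 §5.1.2. A sketch with the recommended type preference values:

```javascript
// ICE candidate priority (RFC 8445 §5.1.2):
//   priority = 2^24 * typePref + 2^8 * localPref + (256 - componentId)
// Recommended type preferences: host=126, server-reflexive=100, relay=0.
const TYPE_PREFERENCE = { host: 126, srflx: 100, relay: 0 };

function icePriority(type, localPref = 65535, componentId = 1) {
  return (2 ** 24) * TYPE_PREFERENCE[type]
       + (2 ** 8) * localPref
       + (256 - componentId);
}

// Host candidates outrank server-reflexive, which outrank relay:
console.log(icePriority('host'));  // 2130706431
console.log(icePriority('srflx')); // 1694498815
console.log(icePriority('relay')); // 16777215
```

During connectivity checks ICE pairs local and remote candidates and tries higher-priority pairs first, which is why a direct host-to-host path wins whenever it works and TURN relay is only the fallback.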
Real-Time Adaptation and Optimization
- Network Adaptability: WebRTC has robust network adaptation capabilities, dynamically adjusting encoding parameters and transmission strategies based on current network conditions. For example, during network congestion, it reduces video resolution or frame rate to maintain smoothness at the cost of some quality. Conversely, it enhances transmission quality when conditions improve.
- Bandwidth Estimation: Using RTCP Receiver Reports (RR) and Sender Reports (SR), WebRTC estimates available bandwidth and adjusts sending rates to avoid packet loss and latency due to network overload.
- Congestion Control: WebRTC employs algorithms like Google’s Congestion Control for WebRTC (GCC), which adjusts sending rates based on latency and packet loss to ensure stable communication across various network conditions.
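The loss-based half of GCC can be sketched in a few lines. This is a simplification (the thresholds follow the published GCC description, but the delay-based estimator, which usually dominates, is omitted):

```javascript
// Simplified sketch of GCC's loss-based rate controller.
// The real controller also runs a delay-gradient estimator in parallel.
function adjustSendRate(currentBps, lossRate) {
  if (lossRate > 0.10) {
    // Heavy loss: back off proportionally to the loss rate
    return Math.round(currentBps * (1 - 0.5 * lossRate));
  }
  if (lossRate < 0.02) {
    // Negligible loss: probe upward by 5%
    return Math.round(currentBps * 1.05);
  }
  return currentBps; // 2-10% loss: hold the current rate
}

console.log(adjustSendRate(1_000_000, 0.20)); // 900000
console.log(adjustSendRate(1_000_000, 0.01)); // 1050000
console.log(adjustSendRate(1_000_000, 0.05)); // 1000000
```

The asymmetry (multiplicative decrease, small multiplicative increase) is what lets the sender converge quickly when the network degrades while probing gently when it recovers.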
Audio-Video Synchronization
In multi-stream scenarios (especially with video and audio streams), synchronization is critical for user experience. WebRTC uses timestamps to ensure audio and video frames are played in the correct order and timing. RTCP timestamps and sequence numbers help the receiver resynchronize streams, even when network fluctuations cause packets to arrive out of order.
Low Latency and Instant Communication
WebRTC is designed with low latency in mind, crucial for real-time interactive applications. Through direct P2P connections, efficient codecs (e.g., VP8, VP9), and minimal intermediate processing, WebRTC delivers near-instant communication, ideal for gaming, remote surgery, and real-time collaboration scenarios with stringent latency requirements.
QoS Assurance Mechanisms
Key technologies for high-quality WebRTC communication:
- Bandwidth Estimation
- Based on REMB (Receiver Estimated Maximum Bitrate) algorithm.
- Dynamically adjusts video bitrate.
- Prioritizes audio bandwidth.
- Packet Loss Resistance
- FEC (Forward Error Correction) technology.
- NACK (Negative Acknowledgment) retransmission mechanism.
- Adaptive jitter buffer.
- Latency Optimization
- Prioritizes low-latency paths.
- Dynamically adjusts frame rate and resolution.
- Intelligent frame-dropping strategy.
Test Data: Under good network conditions, end-to-end latency can be controlled within 200ms.
Security Mechanisms in Detail
WebRTC employs multi-layered security protections:
- Transport Encryption
- DTLS (Datagram Transport Layer Security) for key exchange.
- SRTP (Secure Real-time Transport Protocol) for media encryption.
- Key material exchanged via DTLS-SRTP extension.
- Identity Verification
- Mutual certificate-based authentication.
- Prevents man-in-the-middle attacks.
- Supports fingerprint verification.
- Permission Control
- Users must explicitly authorize media access.
- Browsers isolate media streams across tabs.
- Automatically releases idle resources.
Security Standards: Fully compliant with IETF security specifications.
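The fingerprint verification mentioned above works because each peer's SDP carries an `a=fingerprint` line — a hash of the DTLS certificate that the remote DTLS handshake must match. A small sketch of extracting it for out-of-band display or logging (the SDP fragment is fabricated for illustration):

```javascript
// Extract the DTLS certificate fingerprint from an SDP blob.
// Applications can surface this for out-of-band verification against MITM.
function extractFingerprint(sdp) {
  const match = sdp.match(/^a=fingerprint:(\S+)\s+([0-9A-F:]+)/im);
  if (!match) return null;
  return { algorithm: match[1], value: match[2] };
}

// Fabricated SDP fragment for illustration:
const sdp = [
  'v=0',
  'a=fingerprint:sha-256 7B:8B:F0:65:5F:78:E2:51:3B:AC:6F:F3:3F:46:1B:35:DC:B8:5F:64:1A:24:C2:43:F0:A1:58:D0:A1:2C:19:08',
  'a=setup:actpass'
].join('\r\n');

console.log(extractFingerprint(sdp).algorithm); // sha-256
```

Because signaling is application-provided and therefore the weakest link, comparing these fingerprints over a trusted channel is the standard defense when the signaling server itself cannot be fully trusted.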
Future Development Trends
- AV1 Codec: With the maturity of the AV1 codec, its higher compression efficiency and open-source nature may lead to broader adoption in WebRTC applications, improving video quality or reducing bandwidth consumption at the same quality level.
- WebTransport: Built on HTTP/3 rather than the WebRTC stack itself, WebTransport aims to provide flexible, low-latency client-server data transmission, and may complement WebRTC data channels or enable new use cases and performance improvements.
- Augmented and Virtual Reality: With the rise of AR/VR, WebRTC shows significant potential in real-time 3D spatial interactions and 360-degree video transmission, advancing immersive internet experiences.
- Machine Learning and AI Integration: Integrating machine learning technologies, such as intelligent noise reduction, background replacement, and emotion recognition, will further enrich WebRTC application functionality and enhance user experience.
WebRTC Basic APIs in Detail
Media Stream Acquisition and Processing
getUserMedia API (Camera and Microphone Access)
getUserMedia is the most fundamental WebRTC API, used to obtain user media device permissions:
// Basic usage
navigator.mediaDevices.getUserMedia({ video: true, audio: true })
.then(stream => {
// Successfully obtained media stream
console.log('Successfully obtained media stream:', stream);
})
.catch(err => {
// Handle errors
console.error('Failed to obtain media stream:', err);
});
// More detailed constraints
const constraints = {
video: {
width: { ideal: 1280 },
height: { ideal: 720 },
frameRate: { ideal: 30 }
},
audio: {
echoCancellation: true,
noiseSuppression: true,
sampleRate: 48000
}
};
navigator.mediaDevices.getUserMedia(constraints)
.then(stream => {
// Process media stream
});
Constraint Details:
- video constraints can set resolution, frame rate, etc.
- audio constraints can enable noise suppression, echo cancellation, etc.
- ideal specifies preferred values, which browsers attempt to meet but do not guarantee.
- min/max set minimum/maximum parameter values.
Media Stream (MediaStream) Acquisition and Control
The MediaStream object represents a media stream containing multiple media tracks:
// Obtain media stream
let stream;
navigator.mediaDevices.getUserMedia({ video: true, audio: true })
.then(s => {
stream = s;
// Assign stream to video element
document.getElementById('localVideo').srcObject = stream;
});
// Stop all tracks
function stopStream() {
if (stream) {
stream.getTracks().forEach(track => track.stop());
stream = null;
}
}
// Switch camera
async function switchCamera() {
if (!stream) return;
// Get all video devices
const devices = await navigator.mediaDevices.enumerateDevices();
const videoDevices = devices.filter(device => device.kind === 'videoinput');
if (videoDevices.length < 2) return;
// Find current device ID
const currentTrack = stream.getVideoTracks()[0];
const currentDeviceId = currentTrack.getSettings().deviceId;
// Find another device ID
const newDeviceId = videoDevices.find(d => d.deviceId !== currentDeviceId).deviceId;
// Re-obtain stream
const newStream = await navigator.mediaDevices.getUserMedia({
video: { deviceId: { exact: newDeviceId } },
audio: true
});
// Replace video track (remove the stopped track so the stream holds only the new one)
const oldTrack = stream.getVideoTracks()[0];
const newVideoTrack = newStream.getVideoTracks()[0];
oldTrack.stop();
stream.removeTrack(oldTrack);
stream.addTrack(newVideoTrack);
}
Media Track (MediaStreamTrack) Operations
Controlling individual media tracks:
// Mute/unmute
function toggleMute() {
const audioTrack = stream.getAudioTracks()[0];
if (audioTrack) {
audioTrack.enabled = !audioTrack.enabled;
console.log('Audio status:', audioTrack.enabled ? 'On' : 'Muted');
}
}
// Disable/enable track
function toggleVideo() {
const videoTrack = stream.getVideoTracks()[0];
if (videoTrack) {
videoTrack.enabled = !videoTrack.enabled;
console.log('Video status:', videoTrack.enabled ? 'On' : 'Disabled');
}
}
// Get track settings
function logTrackSettings() {
stream.getTracks().forEach(track => {
console.log('Track settings:', track.getSettings());
});
}
Video and Audio Constraints
Detailed media constraint configurations:
// Advanced video constraints
const advancedVideoConstraints = {
width: { min: 640, ideal: 1280, max: 1920 },
height: { min: 480, ideal: 720, max: 1080 },
frameRate: { min: 15, ideal: 30, max: 60 },
facingMode: 'user' // 'user' for front camera, 'environment' for rear camera
};
// Advanced audio constraints
const advancedAudioConstraints = {
echoCancellation: { exact: true },
noiseSuppression: { exact: true },
autoGainControl: { exact: true },
sampleRate: { ideal: 48000 },
sampleSize: { ideal: 16 },
channelCount: { ideal: 2 }
};
// Combined constraints
const combinedConstraints = {
video: advancedVideoConstraints,
audio: advancedAudioConstraints
};
Media Stream Display
Displaying media streams in HTML elements:
<video id="localVideo" autoplay playsinline muted></video>
<video id="remoteVideo" autoplay playsinline></video>
<script>
// Display local stream
navigator.mediaDevices.getUserMedia({ video: true, audio: true })
.then(stream => {
document.getElementById('localVideo').srcObject = stream;
});
// Display remote stream
function showRemoteStream(remoteStream) {
document.getElementById('remoteVideo').srcObject = remoteStream;
}
</script>
Important Attribute Details:
- autoplay: Automatically plays media.
- playsinline: Enables inline playback on iOS (non-fullscreen).
- muted: Mutes the stream (required by some browsers for autoplay).
Peer-to-Peer Connection Establishment
RTCPeerConnection API Overview
RTCPeerConnection is the core class for managing peer-to-peer connections:
// Create connection
const pc = new RTCPeerConnection({
iceServers: [
{ urls: 'stun:stun.l.google.com:19302' },
// Optional TURN server
// { urls: 'turn:your-turn-server.com', username: 'user', credential: 'pass' }
]
});
// Monitor connection state changes
pc.oniceconnectionstatechange = () => {
console.log('ICE connection state:', pc.iceConnectionState);
};
pc.onconnectionstatechange = () => {
console.log('Connection state:', pc.connectionState);
};
pc.onsignalingstatechange = () => {
console.log('Signaling state:', pc.signalingState);
};
Local and Remote Descriptions (SDP)
SDP (Session Description Protocol) exchange process:
// Create offer
async function createOffer() {
try {
const offer = await pc.createOffer({
offerToReceiveAudio: true,
offerToReceiveVideo: true
});
// Set local description
await pc.setLocalDescription(offer);
// Send offer to peer via signaling server
sendToPeer({
type: 'offer',
sdp: pc.localDescription
});
} catch (err) {
console.error('Failed to create offer:', err);
}
}
// Handle received offer
async function handleOffer(offer) {
try {
// Set remote description
await pc.setRemoteDescription(new RTCSessionDescription(offer));
// Create answer
const answer = await pc.createAnswer();
// Set local description
await pc.setLocalDescription(answer);
// Send answer to peer
sendToPeer({
type: 'answer',
sdp: pc.localDescription
});
} catch (err) {
console.error('Failed to handle offer:', err);
}
}
// Handle received answer
async function handleAnswer(answer) {
try {
// Set remote description
await pc.setRemoteDescription(new RTCSessionDescription(answer));
} catch (err) {
console.error('Failed to handle answer:', err);
}
}
ICE Candidate Collection and Exchange
ICE candidate collection and exchange process:
// Monitor ICE candidate collection
pc.onicecandidate = event => {
if (event.candidate) {
// Send ICE candidate to peer via signaling server
sendToPeer({
type: 'ice-candidate',
candidate: event.candidate
});
} else {
// All candidates collected
console.log('ICE candidate collection completed');
}
};
// Handle received ICE candidate
async function handleIceCandidate(candidate) {
try {
await pc.addIceCandidate(new RTCIceCandidate(candidate));
} catch (err) {
console.error('Failed to add ICE candidate:', err);
}
}
// ICE restart (when connection is lost)
function restartIce() {
pc.restartIce();
}
Connection State Monitoring
Connection state monitoring implementation:
// Connection state monitoring
function monitorConnection() {
// ICE connection state
pc.oniceconnectionstatechange = () => {
const state = pc.iceConnectionState;
console.log('ICE connection state:', state);
switch (state) {
case 'connected':
// Connection established
break;
case 'disconnected':
// Connection lost (may recover)
break;
case 'failed':
// Connection failed (requires restart)
restartIce();
break;
case 'closed':
// Connection closed
break;
}
};
// Signaling state
pc.onsignalingstatechange = () => {
const state = pc.signalingState;
console.log('Signaling state:', state);
switch (state) {
case 'stable':
// Signaling state stable
break;
case 'have-local-offer':
// Local offer created
break;
case 'have-remote-offer':
// Remote offer received
break;
case 'closed':
// Signaling channel closed
break;
}
};
// Connection state (higher-level state)
pc.onconnectionstatechange = () => {
const state = pc.connectionState;
console.log('Connection state:', state);
switch (state) {
case 'connected':
// Connection established
break;
case 'disconnected':
// Connection lost
break;
case 'failed':
// Connection failed
break;
case 'closed':
// Connection closed
break;
}
};
}
Error Handling and Reconnection Mechanism
Robust error handling and reconnection implementation:
// Error handling: RTCPeerConnection has no onerror event; failures
// surface through the connection state instead
pc.onconnectionstatechange = () => {
if (pc.connectionState === 'failed') {
console.error('RTCPeerConnection entered failed state');
handleConnectionFailure();
}
};
// Reconnection mechanism
let reconnectAttempts = 0;
const maxReconnectAttempts = 3;
function handleConnectionFailure() {
if (reconnectAttempts >= maxReconnectAttempts) {
console.error('Maximum reconnection attempts reached');
return;
}
reconnectAttempts++;
console.log(`Attempting reconnection (${reconnectAttempts}/${maxReconnectAttempts})`);
setTimeout(() => {
restartConnection();
}, 2000 * 2 ** (reconnectAttempts - 1)); // Exponential backoff: 2s, 4s, 8s
}
function restartConnection() {
// Save current media stream
const savedStream = stream;
// Close old connection
if (pc) {
pc.close();
}
// Create new connection
pc = new RTCPeerConnection({
iceServers: [/* Same ICE server configuration */]
});
// Re-set all event listeners
monitorConnection();
// Re-add media tracks
if (savedStream) {
savedStream.getTracks().forEach(track => {
pc.addTrack(track, savedStream);
});
}
// Re-initiate connection process
if (isInitiator) {
createOffer();
} else {
// Wait to receive offer
}
}
Data Transmission
RTCDataChannel API Overview
RTCDataChannel is used for transmitting arbitrary data between peers:
// Create data channel
const dataChannel = pc.createDataChannel('chat', {
ordered: true, // Ensure message order
maxRetransmits: 3, // Maximum retransmission attempts
// maxPacketLifeTime: 3000, // Alternative limit in ms; mutually exclusive with maxRetransmits
protocol: '', // Optional application-defined subprotocol string
negotiated: false, // false: channel announced in-band; true: both sides pre-agree on id
id: undefined // Channel ID (auto-assigned by browser when negotiated is false)
});
// Monitor channel state
dataChannel.onopen = () => {
console.log('Data channel opened');
};
dataChannel.onclose = () => {
console.log('Data channel closed');
};
dataChannel.onerror = error => {
console.error('Data channel error:', error);
};
dataChannel.onmessage = event => {
console.log('Received message:', event.data);
};
Data Channel Creation and Configuration
Detailed data channel configuration options:
// Reliable transmission (the default: omit both maxRetransmits and maxPacketLifeTime)
const reliableChannel = pc.createDataChannel('reliable', {
ordered: true
});
// Partially reliable transmission (based on message count)
const partialReliableChannel1 = pc.createDataChannel('partial1', {
ordered: true,
maxRetransmits: 3 // Maximum 3 retransmissions
});
// Partially reliable transmission (based on time)
const partialReliableChannel2 = pc.createDataChannel('partial2', {
ordered: true,
maxPacketLifeTime: 2000 // Abandon retransmission after 2 seconds
});
// Unreliable transmission (no order or reliability guarantees)
const unreliableChannel = pc.createDataChannel('unreliable', {
ordered: false,
maxRetransmits: 0
});
Data Sending and Receiving
Basic data transmission operations:
// Send text message
function sendTextMessage(text) {
if (dataChannel.readyState === 'open') {
dataChannel.send(text);
} else {
console.error('Data channel not open');
}
}
// Send JSON data
function sendJsonData(data) {
if (dataChannel.readyState === 'open') {
dataChannel.send(JSON.stringify(data));
}
}
// Send binary data (Blob)
function sendBlobData(blob) {
if (dataChannel.readyState === 'open') {
dataChannel.send(blob);
}
}
// Send ArrayBuffer
function sendArrayBuffer(buffer) {
if (dataChannel.readyState === 'open') {
dataChannel.send(buffer);
}
}
// Handle received messages
dataChannel.onmessage = event => {
if (typeof event.data === 'string') {
// Text message
console.log('Received text message:', event.data);
} else if (event.data instanceof Blob) {
// Blob data
const reader = new FileReader();
reader.onload = () => {
console.log('Received Blob data:', reader.result);
};
reader.readAsArrayBuffer(event.data);
} else if (event.data instanceof ArrayBuffer) {
// ArrayBuffer data
console.log('Received ArrayBuffer data:', event.data);
}
};
Data Channel State Management
Data channel state monitoring:
// State monitoring
function monitorDataChannel(channel) {
console.log('Initial state:', channel.readyState);
channel.onopen = () => {
console.log('Channel opened');
};
channel.onclose = () => {
console.log('Channel closed');
};
// Periodically check state
setInterval(() => {
console.log('Current state:', channel.readyState);
}, 5000);
}
// Close data channel
function closeDataChannel(channel) {
if (channel.readyState === 'open') {
channel.close();
}
}
// Recreate data channel
function recreateDataChannel(pc, label, options) {
const newChannel = pc.createDataChannel(label, options);
monitorDataChannel(newChannel);
return newChannel;
}
Data Transmission Security
WebRTC data transmission security mechanisms:
- DTLS Encryption:
- All data channels are encrypted with DTLS.
- Keys are negotiated via DTLS handshake.
- Prevents man-in-the-middle attacks.
- SRTP Encryption:
- Media streams are encrypted with SRTP.
- Shares keys with DTLS.
- Identity Verification:
- Certificate verification.
- Prevents identity spoofing.
- Integrity Protection:
- All data includes integrity checks.
- Prevents data tampering.
Security Considerations:
- Always deploy WebRTC applications over HTTPS/WSS.
- Verify fingerprint information in SDP.
- Implement proper error handling.
- Consider adding application-layer encryption (if end-to-end encryption is required).
Comprehensive Example
Complete One-to-One Communication Implementation
<!DOCTYPE html>
<html>
<head>
<title>WebRTC Example</title>
<style>
video { width: 320px; height: 240px; border: 1px solid #ccc; }
.container { display: flex; }
</style>
</head>
<body>
<h1>WebRTC One-to-One Communication</h1>
<div class="container">
<div>
<h3>Local Video</h3>
<video id="localVideo" autoplay playsinline muted></video>
</div>
<div>
<h3>Remote Video</h3>
<video id="remoteVideo" autoplay playsinline></video>
</div>
</div>
<div>
<button id="startButton">Start</button>
<button id="callButton" disabled>Call</button>
<button id="hangupButton" disabled>Hang Up</button>
</div>
<script>
// Global variables
let localStream;
let pc;
const startButton = document.getElementById('startButton');
const callButton = document.getElementById('callButton');
const hangupButton = document.getElementById('hangupButton');
const localVideo = document.getElementById('localVideo');
const remoteVideo = document.getElementById('remoteVideo');
// Signaling channel (simulated)
const signalingChannel = {
send: data => {
// In real applications, this should send data to the peer via WebSocket, etc.
console.log('Sending signaling:', data);
// Simulate receiving (in real applications, this would occur in another browser instance)
setTimeout(() => onSignalingMessage(data), 100);
},
onmessage: null
};
// Set signaling message handler
signalingChannel.onmessage = onSignalingMessage;
// Start button click event
startButton.addEventListener('click', async () => {
try {
localStream = await navigator.mediaDevices.getUserMedia({
video: true,
audio: true
});
localVideo.srcObject = localStream;
callButton.disabled = false;
startButton.disabled = true;
} catch (err) {
console.error('Failed to obtain media stream:', err);
}
});
// Call button click event
callButton.addEventListener('click', async () => {
try {
// Create RTCPeerConnection
pc = new RTCPeerConnection({
iceServers: [
{ urls: 'stun:stun.l.google.com:19302' }
]
});
// Add local stream
localStream.getTracks().forEach(track => {
pc.addTrack(track, localStream);
});
// Monitor ICE candidates
pc.onicecandidate = event => {
if (event.candidate) {
signalingChannel.send({
type: 'ice-candidate',
candidate: event.candidate
});
}
};
// Monitor remote stream
pc.ontrack = event => {
remoteVideo.srcObject = event.streams[0];
};
// Create offer
const offer = await pc.createOffer();
await pc.setLocalDescription(offer);
// Send offer
signalingChannel.send({
type: 'offer',
sdp: pc.localDescription
});
callButton.disabled = true;
hangupButton.disabled = false;
} catch (err) {
console.error('Failed to create call:', err);
}
});
// Hang-up button click event
hangupButton.addEventListener('click', () => {
if (pc) {
pc.close();
pc = null;
}
if (localStream) {
localStream.getTracks().forEach(track => track.stop());
localStream = null;
localVideo.srcObject = null;
remoteVideo.srcObject = null;
}
callButton.disabled = false;
hangupButton.disabled = true;
});
// Handle signaling messages
async function onSignalingMessage(message) {
if (!pc) {
// If initiator, pc is already created
// If receiver, create pc
if (message.type === 'offer') {
pc = new RTCPeerConnection({
iceServers: [
{ urls: 'stun:stun.l.google.com:19302' }
]
});
// Add local stream
if (localStream) {
localStream.getTracks().forEach(track => {
pc.addTrack(track, localStream);
});
}
// Monitor ICE candidates
pc.onicecandidate = event => {
if (event.candidate) {
signalingChannel.send({
type: 'ice-candidate',
candidate: event.candidate
});
}
};
// Monitor remote stream
pc.ontrack = event => {
remoteVideo.srcObject = event.streams[0];
};
// Set remote description
await pc.setRemoteDescription(new RTCSessionDescription(message));
// Create answer
const answer = await pc.createAnswer();
await pc.setLocalDescription(answer);
// Send answer
signalingChannel.send({
type: 'answer',
sdp: pc.localDescription
});
}
} else {
switch (message.type) {
case 'answer':
await pc.setRemoteDescription(new RTCSessionDescription(message));
break;
case 'ice-candidate':
await pc.addIceCandidate(new RTCIceCandidate(message.candidate));
break;
}
}
}
// Simulate receiving signaling (in real applications, this would occur in another browser instance)
signalingChannel.send = data => {
console.log('Sending signaling:', data);
setTimeout(() => {
// Simulate peer receiving and processing
if (data.type === 'offer') {
// Simulate receiver creating pc and handling offer
const simulatedPc = {
setRemoteDescription: async sdp => {
console.log('Simulated receiver setting remote description');
// Simulate creating answer
setTimeout(() => {
const simulatedAnswer = {
type: 'answer',
sdp: { /* Simulated SDP */ }
};
signalingChannel.onmessage({
type: 'answer',
sdp: simulatedAnswer.sdp
});
// Simulate ICE candidate exchange
setTimeout(() => {
signalingChannel.onmessage({
type: 'ice-candidate',
candidate: { /* Simulated ICE candidate */ }
});
}, 100);
}, 100);
},
addIceCandidate: async candidate => {
console.log('Simulated receiver adding ICE candidate');
}
};
// Handle offer
simulatedPc.setRemoteDescription(new RTCSessionDescription(data));
} else if (data.type === 'ice-candidate') {
// Simulate receiving ICE candidate
console.log('Simulated receiving ICE candidate');
}
}, 100);
};
</script>
</body>
</html>
This complete example demonstrates the core functionality of WebRTC, including:
- Media stream acquisition and display
- Peer-to-peer connection establishment
- SDP exchange
- ICE candidate exchange
- Media transmission
In real-world applications, you should:
- Replace the simulated signaling channel with a real signaling server.
- Add error handling and reconnection mechanisms.
- Consider adding more media constraints and configuration options.
- Implement a more robust UI and user experience.