WebGL Basics
WebGL is a JavaScript API based on OpenGL ES 2.0 that enables hardware-accelerated graphics rendering directly in web browsers. It draws complex 3D and 2D graphics onto an HTML <canvas> element, enabling cross-platform interactive graphics applications without plugins.
WebGL Overview
- Core Concept: WebGL leverages the GPU (Graphics Processing Unit) to render graphics, enabling it to handle complex computations such as lighting, shadows, and texture mapping.
- Shading Language: WebGL uses GLSL (OpenGL Shading Language) to write vertex shaders and fragment shaders, which compute vertex positions and pixel colors, respectively.
- Graphics Pipeline: WebGL follows a traditional graphics rendering pipeline, including stages like vertex processing, primitive assembly, and rasterization.
- Context Acquisition: The WebGL rendering context is obtained via
canvas.getContext('webgl') or, in older browsers, canvas.getContext('experimental-webgl').
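A small helper can wrap the fallback between the two context names; the sketch below assumes it receives a canvas-like object (getWebGLContext is an illustrative name, not a standard API):

```javascript
// Illustrative helper: try the standard context name first, then the
// legacy 'experimental-webgl' name used by older browsers.
function getWebGLContext(canvas) {
  return (
    canvas.getContext('webgl') ||
    canvas.getContext('experimental-webgl') ||
    null
  );
}
```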
Setting Up the WebGL Environment
1. HTML Preparation
Create a <canvas> element, which serves as the drawing surface for WebGL graphics.
<canvas id="webgl-canvas"></canvas>
2. Acquiring the WebGL Context
In JavaScript, retrieve the canvas element and initialize the WebGL context.
const canvas = document.getElementById('webgl-canvas');
const gl = canvas.getContext('webgl');
if (!gl) {
alert('WebGL not supported');
}
3. Setting the Viewport and Clearing the Color
Configure the WebGL viewport size and clear the color buffer.
gl.viewport(0, 0, canvas.width, canvas.height);
gl.clearColor(0.0, 0.0, 0.0, 1.0); // Set background color to black
gl.clear(gl.COLOR_BUFFER_BIT); // Clear the color buffer
WebGL Code Example
// Acquire WebGL context
const canvas = document.getElementById('webgl');
const gl = canvas.getContext('webgl');
// Set clear color to black
gl.clearColor(0.0, 0.0, 0.0, 1.0);
gl.clear(gl.COLOR_BUFFER_BIT);
// Define vertex and fragment shaders
const vertexShaderSource = `
attribute vec2 position;
void main() {
gl_Position = vec4(position, 0.0, 1.0);
gl_PointSize = 10.0; // Must be set when drawing gl.POINTS
}
`;
const fragmentShaderSource = `
precision mediump float;
uniform vec4 u_FragColor;
void main() {
gl_FragColor = u_FragColor;
}
`;
// Compile shaders
function compileShader(type, source) {
const shader = gl.createShader(type);
gl.shaderSource(shader, source);
gl.compileShader(shader);
if (!gl.getShaderParameter(shader, gl.COMPILE_STATUS)) {
console.error('Shader compile error:', gl.getShaderInfoLog(shader));
gl.deleteShader(shader);
return null;
}
return shader;
}
const vertexShader = compileShader(gl.VERTEX_SHADER, vertexShaderSource);
const fragmentShader = compileShader(gl.FRAGMENT_SHADER, fragmentShaderSource);
// Create and link program
const program = gl.createProgram();
gl.attachShader(program, vertexShader);
gl.attachShader(program, fragmentShader);
gl.linkProgram(program);
if (!gl.getProgramParameter(program, gl.LINK_STATUS)) {
console.error('Program link error:', gl.getProgramInfoLog(program));
throw new Error('Failed to link WebGL program');
}
gl.useProgram(program);
// Set vertex positions
const positionAttributeLocation = gl.getAttribLocation(program, 'position');
gl.enableVertexAttribArray(positionAttributeLocation);
const positionBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
const positions = [0, 0];
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(positions), gl.STATIC_DRAW);
gl.vertexAttribPointer(positionAttributeLocation, 2, gl.FLOAT, false, 0, 0);
// Set color and draw
const colorLocation = gl.getUniformLocation(program, 'u_FragColor');
gl.uniform4f(colorLocation, 1.0, 0.0, 0.0, 1.0); // Red
gl.drawArrays(gl.POINTS, 0, 1); // Draw a single point
This code first sets up the WebGL context and defines two shaders: a vertex shader that determines vertex positions and a fragment shader that sets the color of each fragment (pixel). It then compiles and links these shaders, and uses a vertex position buffer to draw a red point at the center of the canvas.
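The point lands at the canvas center because clip-space (0, 0) is the middle of the viewport. The viewport mapping from clip space to pixels can be sketched in plain JS (clipToPixel is an illustrative helper, not a WebGL call):

```javascript
// Illustrative only: how WebGL's viewport transform maps clip-space
// coordinates (-1..1) to pixel coordinates. (0, 0) lands at the center.
function clipToPixel(clipX, clipY, width, height) {
  return {
    x: (clipX * 0.5 + 0.5) * width,
    y: (1 - (clipY * 0.5 + 0.5)) * height, // pixel Y grows downward
  };
}
```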
WebGL Rendering and Creating 2D Content
WebGL is a powerful tool for creating complex 3D graphics in web browsers. While it is designed primarily for 3D rendering, it can also be used to create 2D content. By cleverly utilizing geometry, projection, and textures, we can build a variety of 2D graphics.
Creating the Canvas
First, we need to set up a <canvas> element in HTML to host the WebGL rendering content:
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>WebGL 2D Example</title>
</head>
<body>
<canvas id="webgl-canvas" width="600" height="400"></canvas>
<script src="main.js"></script>
</body>
</html>
2D Rendering
// Get canvas element and WebGL context
const canvas = document.getElementById('webgl-canvas');
const gl = canvas.getContext('webgl', { antialias: false });
// Set viewport size and clear color
gl.viewport(0, 0, canvas.width, canvas.height);
gl.clearColor(1, 1, 1, 1); // White background
gl.clear(gl.COLOR_BUFFER_BIT);
// Define 2D coordinate transformation matrix (mat3 comes from the gl-matrix library)
const projectionMatrix = mat3.create();
mat3.translate(projectionMatrix, projectionMatrix, [-1, -1]);
mat3.scale(projectionMatrix, projectionMatrix, [2 / canvas.width, 2 / canvas.height]);
// Create vertex and fragment shaders
const vertexShaderSource = `
attribute vec2 position;
uniform mat3 projectionMatrix;
void main() {
gl_Position = vec4((projectionMatrix * vec3(position, 1)).xy, 0, 1);
}
`;
const fragmentShaderSource = `
precision mediump float;
uniform vec4 color;
void main() {
gl_FragColor = color;
}
`;
// Compile and link shaders (createShader, createProgram, and createBuffer are
// small helpers wrapping the compile, link, and buffer-upload steps shown earlier)
const vertexShader = createShader(gl, gl.VERTEX_SHADER, vertexShaderSource);
const fragmentShader = createShader(gl, gl.FRAGMENT_SHADER, fragmentShaderSource);
const program = createProgram(gl, vertexShader, fragmentShader);
// Get attribute and uniform locations
const positionAttributeLocation = gl.getAttribLocation(program, 'position');
const colorUniformLocation = gl.getUniformLocation(program, 'color');
// Create vertex buffer and set vertex data
const vertices = [
100, 100, // Bottom-left (pixel coordinates)
500, 100, // Bottom-right
100, 300, // Top-left
500, 300, // Top-right
];
const positionBuffer = createBuffer(gl, vertices);
// Draw rectangle
function drawRectangle(color) {
gl.useProgram(program);
gl.uniformMatrix3fv(gl.getUniformLocation(program, 'projectionMatrix'), false, projectionMatrix);
gl.uniform4fv(colorUniformLocation, color);
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.enableVertexAttribArray(positionAttributeLocation);
gl.vertexAttribPointer(positionAttributeLocation, 2, gl.FLOAT, false, 0, 0);
gl.drawArrays(gl.TRIANGLE_STRIP, 0, vertices.length / 2);
}
// Initialize drawing
drawRectangle([0, 0, 1, 1]); // Blue rectangle (RGBA components in the 0–1 range)
// Animation loop
function animate() {
requestAnimationFrame(animate);
// Update color
let t = performance.now() / 1000;
let r = Math.sin(t) * 128 + 128;
let g = Math.cos(t) * 128 + 128;
let b = 255;
drawRectangle([r / 255, g / 255, b / 255, 1]);
// Render to screen
gl.flush();
}
animate();
This code sets up a 2D projection matrix that maps the canvas's pixel range [0, width] × [0, height] into clip space [-1, 1]. It then creates vertex and fragment shaders to handle vertex positions and colors. A vertex buffer stores the four vertices of a rectangle, and the drawRectangle function draws it. The animation loop continuously changes the rectangle's color over time.
The 2D rendering is achieved through coordinate transformations in the vertex shader, where the projectionMatrix adjusts vertex positions to fit the canvas’s 2D coordinate system. The fragment shader simply passes and outputs the color.
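The matrix boils down to one line of arithmetic per axis. A plain-JS equivalent (pixelToClip is an illustrative name, not a gl-matrix API) makes the mapping easy to verify:

```javascript
// Plain-JS version of what the mat3 projection does: scale pixel
// coordinates by 2/width and 2/height, then translate by (-1, -1),
// mapping [0, width] x [0, height] into clip space [-1, 1].
function pixelToClip(x, y, width, height) {
  return [(2 * x) / width - 1, (2 * y) / height - 1];
}
```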
Modifying the Vertex Shader
Add rotation and scale uniforms:
const vertexShaderSource = `
attribute vec2 position;
uniform mat3 projectionMatrix;
uniform float rotation;
uniform vec2 scale;
void main() {
// Apply rotation
vec2 rotatedPosition = vec2(
position.x * cos(rotation) - position.y * sin(rotation),
position.x * sin(rotation) + position.y * cos(rotation)
);
// Apply scaling
vec2 scaledPosition = vec2(
rotatedPosition.x * scale.x,
rotatedPosition.y * scale.y
);
gl_Position = vec4((projectionMatrix * vec3(scaledPosition, 1)).xy, 0, 1);
}
`;
// Create and update uniforms
const rotationUniformLocation = gl.getUniformLocation(program, 'rotation');
const scaleUniformLocation = gl.getUniformLocation(program, 'scale');
// Modify drawRectangle to include rotation and scale
function drawRectangle(color, rotation, scale) {
gl.useProgram(program);
gl.uniform4fv(colorUniformLocation, color);
gl.uniform1f(rotationUniformLocation, rotation);
gl.uniform2fv(scaleUniformLocation, scale);
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.enableVertexAttribArray(positionAttributeLocation);
gl.vertexAttribPointer(positionAttributeLocation, 2, gl.FLOAT, false, 0, 0);
gl.drawArrays(gl.TRIANGLE_STRIP, 0, vertices.length / 2);
}
// Animation loop
function animate() {
requestAnimationFrame(animate);
// Update rotation and scale
let t = performance.now() / 1000;
let rotationValue = t * Math.PI * 2; // Rotation angle
let scaleX = 1 + Math.sin(t) * 0.2; // Scale factor X
let scaleY = 1 + Math.cos(t) * 0.2; // Scale factor Y
// Draw rotated and scaled rectangle
drawRectangle([0, 0, 1, 1], rotationValue, [scaleX, scaleY]); // Blue
// Render to screen
gl.flush();
}
animate();
The rectangle now rotates and scales over time. The rotation uniform controls the rotation angle, and the scale uniform, a two-dimensional vector, controls the scaling factors for the X and Y axes.
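The shader's rotate-then-scale math can be reproduced in plain JavaScript for a single point (transformPoint2D is an illustrative helper mirroring the GLSL above, not a WebGL API):

```javascript
// Mirrors the vertex shader: rotate first, then scale (same order as above).
function transformPoint2D([x, y], rotation, [sx, sy]) {
  const rx = x * Math.cos(rotation) - y * Math.sin(rotation);
  const ry = x * Math.sin(rotation) + y * Math.cos(rotation);
  return [rx * sx, ry * sy];
}
```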
Translation
To support translation, first add a translation uniform to the vertex shader and apply it after scaling:
uniform vec2 translation;
// ...in main(), after computing scaledPosition:
gl_Position = vec4((projectionMatrix * vec3(scaledPosition + translation, 1)).xy, 0, 1);
Then modify the drawRectangle function to accept a translation parameter and update translation values in the animation loop:
// Add translation uniform
const translationUniformLocation = gl.getUniformLocation(program, 'translation');
// Modify drawRectangle to include translation
function drawRectangle(color, rotation, scale, translation) {
gl.useProgram(program);
gl.uniform4fv(colorUniformLocation, color);
gl.uniform1f(rotationUniformLocation, rotation);
gl.uniform2fv(scaleUniformLocation, scale);
gl.uniform2fv(translationUniformLocation, translation);
gl.bindBuffer(gl.ARRAY_BUFFER, positionBuffer);
gl.enableVertexAttribArray(positionAttributeLocation);
gl.vertexAttribPointer(positionAttributeLocation, 2, gl.FLOAT, false, 0, 0);
gl.drawArrays(gl.TRIANGLE_STRIP, 0, vertices.length / 2);
}
// Animation loop
function animate() {
requestAnimationFrame(animate);
// Update rotation, scale, and translation
let t = performance.now() / 1000;
let rotationValue = t * Math.PI * 2;
let scaleX = 1 + Math.sin(t) * 0.2;
let scaleY = 1 + Math.cos(t) * 0.2;
let translationX = Math.sin(t) * canvas.width * 0.2;
let translationY = Math.cos(t) * canvas.height * 0.2;
// Draw rotated, scaled, and translated rectangle
drawRectangle([0, 0, 1, 1], rotationValue, [scaleX, scaleY], [translationX, translationY]);
// Render to screen
gl.flush();
}
animate();
A new translation uniform controls the rectangle's position. The animation loop calculates translation values that change over time, making the rectangle rotate, scale, and move within the canvas.
WebGL also supports texture mapping for complex 2D images. Assuming we have an image named texture.png, we can load and apply it to the rectangle:
// Load texture
const texture = loadTexture(gl, 'texture.png');
// Apply texture in drawRectangle
function drawRectangle(color, rotation, scale, translation, texture) {
// ...
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.uniform1i(gl.getUniformLocation(program, 'texture'), 0);
// ...
}
// Create texture
function loadTexture(gl, url) {
const texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
const level = 0;
const internalFormat = gl.RGBA;
const width = 1;
const height = 1;
const border = 0;
const srcFormat = gl.RGBA;
const srcType = gl.UNSIGNED_BYTE;
// Placeholder: a single blue pixel shown until the image finishes loading
const pixelData = new Uint8Array([0, 0, 255, 255]);
gl.texImage2D(
gl.TEXTURE_2D,
level,
internalFormat,
width,
height,
border,
srcFormat,
srcType,
pixelData
);
const image = new Image();
image.onload = () => {
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(gl.TEXTURE_2D, level, internalFormat, gl.RGBA, gl.UNSIGNED_BYTE, image);
// Note: in WebGL 1, gl.generateMipmap requires power-of-two texture dimensions
gl.generateMipmap(gl.TEXTURE_2D);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR_MIPMAP_LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
};
image.src = url;
return texture;
}
// Modify drawRectangle call to pass texture
drawRectangle([1, 1, 1, 1], rotationValue, [scaleX, scaleY], [translationX, translationY], texture);
The image now transforms with rotation, scaling, and translation.
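Because WebGL 1 only supports mipmaps for power-of-two textures, loaders commonly gate the generateMipmap path on a power-of-two check (a pattern used in MDN's tutorials, for example). The helper is a one-line bit trick:

```javascript
// A number is a power of two exactly when it has a single bit set,
// i.e. value & (value - 1) clears that bit and leaves zero.
function isPowerOf2(value) {
  return value > 0 && (value & (value - 1)) === 0;
}
```

Inside loadTexture one would call isPowerOf2(image.width) and isPowerOf2(image.height), generating mipmaps only when both hold and otherwise falling back to CLAMP_TO_EDGE with LINEAR filtering.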
Lighting
In WebGL, lighting is typically computed in the fragment shader, requiring the definition of light source position, color, and object surface properties (e.g., normals).
const fragmentShaderSource = `
precision mediump float;
uniform vec4 color;
uniform vec3 lightPosition;
varying vec3 vNormal;
varying vec3 vWorldPosition;
void main() {
vec3 lightDirection = normalize(lightPosition - vWorldPosition);
vec3 diffuse = max(dot(vNormal, lightDirection), 0.0) * color.rgb;
gl_FragColor = vec4(diffuse, color.a);
}
`;
// Pass normals and world position in vertex shader
const vertexShaderSource = `
attribute vec3 position;
attribute vec3 normal;
uniform mat4 modelViewMatrix;
uniform mat4 projectionMatrix;
varying vec3 vNormal;
varying vec3 vWorldPosition;
void main() {
vNormal = normalize(mat3(modelViewMatrix) * normal);
vWorldPosition = vec3(modelViewMatrix * vec4(position, 1.0));
gl_Position = projectionMatrix * vec4(vWorldPosition, 1.0);
}
`;
This code calculates the dot product of the light direction and the surface normal to determine diffuse lighting intensity. Note that actual lighting models may be more complex, including specular highlights, ambient light, etc.
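The Lambert diffuse term the shader computes can be checked in plain JavaScript (diffuseIntensity is an illustrative helper mirroring the GLSL, not a WebGL API):

```javascript
// intensity = max(dot(N, normalize(L - P)), 0), as in the fragment shader.
// normal is assumed to be unit length.
function diffuseIntensity(normal, lightPos, worldPos) {
  const L = [
    lightPos[0] - worldPos[0],
    lightPos[1] - worldPos[1],
    lightPos[2] - worldPos[2],
  ];
  const len = Math.hypot(L[0], L[1], L[2]);
  const d = (normal[0] * L[0] + normal[1] * L[1] + normal[2] * L[2]) / len;
  return Math.max(d, 0); // light behind the surface contributes nothing
}
```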
Depth Testing
Depth testing is disabled by default in WebGL; enable it with gl.enable(gl.DEPTH_TEST). Depth testing determines which fragments are in front of others, so 3D shapes render correctly regardless of draw order; without it, you would have to draw distant objects first and closer ones last.
gl.enable(gl.DEPTH_TEST);
gl.depthFunc(gl.LESS); // Use less-than comparison
Blending Mode
In WebGL, blending modes are set using gl.blendFunc and gl.blendEquation.
gl.enable(gl.BLEND);
gl.blendEquation(gl.FUNC_ADD);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
3D Mathematics Fundamentals in WebGL
In WebGL, 3D mathematics is an essential component, as it underpins computations related to geometry, transformations, and shading. Below are key concepts of 3D mathematics and their applications in WebGL:
Coordinate System
Right-Handed Coordinate System: WebGL conventionally uses a right-handed coordinate system, where the positive X-axis points right, the positive Y-axis points up, and the positive Z-axis points out of the screen toward the viewer.
Vectors
Vectors represent direction and magnitude, typically stored as arrays like [x, y, z]. In WebGL, vectors are used to represent positions, velocities, colors, and more.
Matrices
Matrices describe transformations such as translation, rotation, and scaling. In WebGL, 4×4 matrices are commonly used because, with homogeneous coordinates, a single matrix can combine rotation, scaling, and translation.
Translation
A translation matrix moves the origin to a new position. For a translation vector [tx, ty, tz], the matrix is created as follows:
var translateMatrix = mat4.create();
mat4.translate(translateMatrix, translateMatrix, [tx, ty, tz]);
Rotation
Rotation is performed around a specific axis. For example, a matrix for rotating by angle radians around the X-axis is:
var rotateXMatrix = mat4.create();
mat4.rotateX(rotateXMatrix, rotateXMatrix, angle);
For rotations around the Y-axis and Z-axis, use mat4.rotateY() and mat4.rotateZ().
Scaling
A scaling matrix applies independent scaling along each axis. For scaling factors [sx, sy, sz], the matrix is:
var scaleMatrix = mat4.create();
mat4.scale(scaleMatrix, scaleMatrix, [sx, sy, sz]);
Composite Transformations
Multiple transformations can be combined via matrix multiplication. Note that the right-hand factor applies to a point first, so translateMatrix * rotateXMatrix rotates first and then translates:
var transformationMatrix = mat4.create();
mat4.multiply(transformationMatrix, translateMatrix, rotateXMatrix);
Transformations in Vertex Shaders
In vertex shaders, model-space vertex coordinates are typically transformed to clip space using the model-view-projection (MVP) matrix:
attribute vec3 a_position;
uniform mat4 u_modelMatrix;
uniform mat4 u_viewMatrix;
uniform mat4 u_projectionMatrix;
void main() {
gl_Position = u_projectionMatrix * u_viewMatrix * u_modelMatrix * vec4(a_position, 1.0);
}
Matrix Stack
In practice, helper functions such as mat4.perspective() and mat4.lookAt() build the projection and view matrices, while a matrix stack (pushing and popping matrices) simplifies managing hierarchical transformations, allowing easy organization and reversion of transformation state.
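That multiplication order matters can be verified without WebGL at all. The sketch below re-implements the column-major layout gl-matrix uses in plain JavaScript (mat4Identity, mat4Multiply, and applyMat4 are illustrative names, not gl-matrix APIs) and shows that T * R and R * T move the same point to different places:

```javascript
function mat4Identity() {
  return [1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1, 0, 0, 0, 0, 1];
}

// out = a * b, column-major: element (row, col) lives at index col*4+row
function mat4Multiply(a, b) {
  const out = new Array(16).fill(0);
  for (let col = 0; col < 4; col++)
    for (let row = 0; row < 4; row++)
      for (let k = 0; k < 4; k++)
        out[col * 4 + row] += a[k * 4 + row] * b[col * 4 + k];
  return out;
}

// Apply a column-major 4x4 matrix to a point (implicit w = 1)
function applyMat4(m, [x, y, z]) {
  return [
    m[0] * x + m[4] * y + m[8] * z + m[12],
    m[1] * x + m[5] * y + m[9] * z + m[13],
    m[2] * x + m[6] * y + m[10] * z + m[14],
  ];
}

const T = mat4Identity();
T[12] = 5; // translate +5 along X

const R = mat4Identity(); // rotate 90 degrees about Z
R[0] = 0; R[1] = 1; R[4] = -1; R[5] = 0;

const TR = mat4Multiply(T, R); // rotates first, then translates
const RT = mat4Multiply(R, T); // translates first, then rotates
```

Applying TR to the point (1, 0, 0) yields (5, 1, 0), while RT yields (0, 6, 0): the same factors, composed in the opposite order, produce different results.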
Vector and Matrix Operation Libraries
In JavaScript, libraries like gl-matrix handle 3D mathematics. For example, methods like mat4.create() and mat4.translate() used above are part of the gl-matrix library.
Normals
Normal vectors indicate the direction perpendicular to a surface, used for lighting calculations. In WebGL, normals are transformed from model space to view space for accurate lighting. In a vertex shader, this is typically done as follows:
attribute vec3 a_normal;
varying vec3 v_normal;
uniform mat4 u_normalMatrix;
void main() {
v_normal = normalize(mat3(u_normalMatrix) * a_normal);
// ...other transformations...
}
- Note: The normal matrix (u_normalMatrix) is usually the inverse transpose of the model-view matrix, which keeps normals perpendicular to surfaces even under non-uniform scaling.
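Why the inverse transpose matters can be shown in a few lines of plain JS. Under the non-uniform scale diag(2, 1, 1), transforming a normal with the model matrix itself breaks perpendicularity, while the inverse transpose (for a diagonal matrix, simply the reciprocal scales) preserves it:

```javascript
const scale = [2, 1, 1];                   // diagonal model matrix diag(2, 1, 1)
const normalMat = scale.map((s) => 1 / s); // its inverse transpose: diag(0.5, 1, 1)

const tangent = [1, -1, 0]; // a direction lying in the surface
const normal = [1, 1, 0];   // perpendicular to the tangent

const tScaled = tangent.map((v, i) => v * scale[i]);    // transformed tangent
const nNaive = normal.map((v, i) => v * scale[i]);      // normal via model matrix (wrong)
const nFixed = normal.map((v, i) => v * normalMat[i]);  // normal via inverse transpose (right)

const dot = (a, b) => a[0] * b[0] + a[1] * b[1] + a[2] * b[2];
// dot(tScaled, nNaive) is nonzero, dot(tScaled, nFixed) is zero
```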
Texture Coordinates
Texture coordinates map 2D textures onto 3D model surfaces. In the vertex shader, texture coordinates are passed to the fragment shader:
attribute vec2 a_texCoord;
varying vec2 v_texCoord;
void main() {
// ...other transformations...
v_texCoord = a_texCoord;
}
In the fragment shader, texture coordinates are used to sample textures:
uniform sampler2D u_texture;
varying vec2 v_texCoord;
void main() {
gl_FragColor = texture2D(u_texture, v_texCoord);
}
Viewport Transformation
The viewport transformation converts normalized device coordinates to screen space. It is configured with the gl.viewport() function, which sets the target rendering area for WebGL.
Depth Testing
Depth testing determines which pixels should occlude others. It is disabled by default; turn it on with gl.enable(gl.DEPTH_TEST) and off with gl.disable(gl.DEPTH_TEST).
Blending
Blending combines newly drawn colors with existing ones, enabling transparency effects, often used with textures that have an alpha channel:
gl.enable(gl.BLEND);
gl.blendFuncSeparate(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA, gl.ONE, gl.ONE_MINUS_SRC_ALPHA);
Depth Buffer
The depth buffer stores depth values for each pixel, used in depth testing. WebGL creates a depth buffer by default, but you can configure it:
gl.clearDepth(1.0);
gl.clear(gl.DEPTH_BUFFER_BIT);
Clipping
Clipping restricts the rendering area. Use gl.enable(gl.SCISSOR_TEST) and gl.scissor() to set a clipping region.
Projection
Projection transforms 3D space points into 2D screen space. Common projection types in WebGL include perspective and orthographic projections. Perspective projection simulates human depth perception, while orthographic projection is used for 2D views or isometric 3D scenes.
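Per axis, an orthographic projection is just a linear remap of [min, max] onto [-1, 1] (the Z axis additionally gets a sign flip in the usual GL convention, omitted here). A plain-JS sketch of that remap (orthoMap is an illustrative name, not a gl-matrix API):

```javascript
// What an orthographic projection does per axis: linearly map the
// range [min, max] onto clip space [-1, 1].
function orthoMap(v, min, max) {
  return (2 * (v - min)) / (max - min) - 1;
}
```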
Perspective Projection, typically used for 3D scenes, is generated with mat4.perspective():
var fieldOfView = 45 * Math.PI / 180; // 45 degrees
var aspect = canvas.width / canvas.height; // Aspect ratio
var near = 0.1; // Near clipping plane
var far = 100; // Far clipping plane
var projectionMatrix = mat4.create();
mat4.perspective(projectionMatrix, fieldOfView, aspect, near, far);
Orthographic Projection, used for 2D scenes or uniformly scaled 3D scenes, is generated with mat4.ortho():
var left = -1; // Left boundary
var right = 1; // Right boundary
var bottom = -1; // Bottom boundary
var top = 1; // Top boundary
var near = -1; // Near clipping plane
var far = 1; // Far clipping plane
var orthoMatrix = mat4.create();
mat4.ortho(orthoMatrix, left, right, bottom, top, near, far);
Frustum Culling
Frustum culling is an optimization technique to avoid rendering geometry outside the view frustum. WebGL automatically clips primitives to the frustum, but manually testing whole objects on the CPU and skipping their draw calls saves vertex processing work.
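One common CPU-side test works directly in clip space: transform a point by the MVP matrix and compare it against its own w component (insideFrustum is an illustrative helper, not a WebGL API):

```javascript
// A point with clip coordinates (x, y, z, w) is inside the view
// frustum when |x| <= w, |y| <= w, and |z| <= w (with w > 0).
function insideFrustum([x, y, z, w]) {
  return w > 0 && Math.abs(x) <= w && Math.abs(y) <= w && Math.abs(z) <= w;
}
```

For whole objects, the same test is usually applied to a bounding sphere or box rather than individual vertices.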
Culling
Culling is another optimization technique that skips rendering invisible faces. WebGL supports back-face culling, which discards faces whose winding order shows them facing away from the viewer; it is disabled by default and enabled with gl.enable(gl.CULL_FACE).
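The facing decision itself is simple 2D math: the sign of the projected triangle's area encodes its winding order. An illustrative plain-JS version (with gl.frontFace(gl.CCW), the default, counter-clockwise winding counts as front-facing):

```javascript
// Signed area of a 2D triangle: positive for counter-clockwise winding.
function signedArea([ax, ay], [bx, by], [cx, cy]) {
  return 0.5 * ((bx - ax) * (cy - ay) - (cx - ax) * (by - ay));
}

function isFrontFacing(a, b, c) {
  return signedArea(a, b, c) > 0; // CCW as seen on screen => front-facing
}
```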
Vertex Buffer Objects (VBOs)
Vertex Buffer Objects store vertex data, improving performance by reducing memory copies. Create and bind a VBO using gl.bindBuffer() and gl.bufferData():
var vertices = ... // Vertex data
var vertexBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(vertices), gl.STATIC_DRAW);
Index Buffer Objects (EBOs)
Index Buffer Objects specify the order of vertices to form polygons. Create and bind an EBO using gl.bindBuffer() and gl.bufferData():
var indices = ... // Index data
var indexBuffer = gl.createBuffer();
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, new Uint16Array(indices), gl.STATIC_DRAW);
Texture Units
WebGL supports multiple texture units, allowing simultaneous use of multiple textures. Switch and set texture units with gl.activeTexture() and gl.uniform1i():
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, texture1);
gl.uniform1i(shader.uniforms.u_texture1, 0);
gl.activeTexture(gl.TEXTURE1);
gl.bindTexture(gl.TEXTURE_2D, texture2);
gl.uniform1i(shader.uniforms.u_texture2, 1); // shader.uniforms here is an application-level cache of uniform locations
Creating 3D Objects in WebGL
Preparing the WebGL Context
First, obtain the WebGL context from an HTML5 <canvas> element:
<canvas id="webgl-canvas"></canvas>
const canvas = document.getElementById('webgl-canvas');
const gl = canvas.getContext('webgl');
Defining Geometry
The first step in creating a 3D object is defining its vertex data. Here’s an example for a simple cube:
const vertices = [
// Face 1 (front)
-1, -1, 1, // 0
1, -1, 1, // 1
1, 1, 1, // 2
-1, 1, 1, // 3
// Face 2 (back)
-1, -1, -1, // 4
-1, 1, -1, // 5
1, 1, -1, // 6
1, -1, -1, // 7
// ...other faces...
];
Creating Vertex Buffer Objects
Store vertex data in a Vertex Buffer Object (VBO) for efficient processing by WebGL:
const vertexBuffer = gl.createBuffer();
gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
gl.bufferData(gl.ARRAY_BUFFER, new Float32Array(vertices), gl.STATIC_DRAW);
Setting Vertex Attributes
Declare vertex attributes in the vertex shader and associate them with the vertex buffer in the main program:
const vertexShaderSource = `
attribute vec3 a_position;
void main() {
gl_Position = vec4(a_position, 1.0);
}
`;
const positionAttributeLocation = gl.getAttribLocation(program, 'a_position');
gl.enableVertexAttribArray(positionAttributeLocation);
gl.bindBuffer(gl.ARRAY_BUFFER, vertexBuffer);
gl.vertexAttribPointer(positionAttributeLocation, 3, gl.FLOAT, false, 0, 0);
Defining Geometry Faces
Assign vertices to each face using an Element Array Buffer (index buffer):
const indices = [
// Face 1
0, 1, 2, 2, 3, 0,
// Face 2
4, 5, 6, 6, 7, 4,
// ...other faces...
];
const indexBuffer = gl.createBuffer();
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
gl.bufferData(gl.ELEMENT_ARRAY_BUFFER, new Uint16Array(indices), gl.STATIC_DRAW);
Creating Transformation Matrices
Create model, view, and projection matrices to transform 3D geometry into 2D screen space:
const modelMatrix = mat4.create();
const viewMatrix = mat4.create();
const projectionMatrix = mat4.create();
// Example: Rotate the cube
mat4.rotate(modelMatrix, modelMatrix, Math.PI / 2, [1, 0, 0]); // 90 degrees around X-axis
// Example: Set camera position
mat4.translate(viewMatrix, viewMatrix, [-3, 0, -5]);
Handling Lighting
Compute lighting effects in the fragment shader. Here’s an example with a simple point light:
const fragmentShaderSource = `
precision mediump float;
uniform vec3 u_lightPosition;
uniform vec3 u_color;
varying vec3 v_normal;
varying vec3 v_worldPosition;
void main() {
vec3 lightDirection = normalize(u_lightPosition - v_worldPosition);
float intensity = max(dot(v_normal, lightDirection), 0.0);
vec3 litColor = u_color * intensity;
gl_FragColor = vec4(litColor, 1.0);
}
`;Binding Textures
Load and apply textures to the 3D object:
const texture = loadTexture(gl, 'texture.jpg');
function loadTexture(gl, url) {
const texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR);
const image = new Image();
image.onload = () => {
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
gl.generateMipmap(gl.TEXTURE_2D);
};
image.src = url;
return texture;
}
Drawing 3D Objects
Draw the 3D object in the main loop:
function drawScene() {
gl.viewport(0, 0, canvas.width, canvas.height);
gl.clearColor(0.1, 0.1, 0.1, 1.0);
gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);
gl.enable(gl.DEPTH_TEST);
gl.enable(gl.BLEND);
gl.blendFunc(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA);
// Compute model-view-projection matrix
const mvpMatrix = mat4.create();
mat4.multiply(mvpMatrix, projectionMatrix, viewMatrix);
mat4.multiply(mvpMatrix, mvpMatrix, modelMatrix);
// Bind texture
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, texture);
// Draw cube
gl.useProgram(shaderProgram);
gl.uniformMatrix4fv(shaderProgram.uniforms.u_mvpMatrix, false, mvpMatrix);
gl.uniform3fv(shaderProgram.uniforms.u_lightPosition, [0, 10, 0]);
gl.uniform3fv(shaderProgram.uniforms.u_color, [1, 0, 0]); // Red
gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, indexBuffer);
gl.drawElements(gl.TRIANGLES, indices.length, gl.UNSIGNED_SHORT, 0);
}
setInterval(drawScene, 1000 / 60); // or drive redraws with requestAnimationFrame
Performance Optimization
- Use VBOs and EBOs to reduce memory copying and improve efficiency.
- Implement frustum culling and back-face culling to minimize unnecessary rendering.
- Use Web Workers for complex computational tasks.
- Employ batch rendering to reduce draw calls.
- Utilize Level of Detail (LOD) to adjust detail based on distance.
- Use instancing to draw multiple similar objects efficiently.
Adding Animation
To make the 3D object dynamic, modify the model matrix’s rotation, translation, or scaling values. For example, create a rotation animation:
let rotationAngle = 0;
function animate() {
requestAnimationFrame(animate);
// Rotate cube
rotationAngle += 0.01;
mat4.fromRotation(modelMatrix, rotationAngle, [0, 1, 0]);
drawScene();
}
animate();
Complex Lighting Models
A simple point light may not suffice for realistic shading. More complete models combine ambient, diffuse, and specular terms, as in the Blinn-Phong shader below; further refinements add effects such as Fresnel reflectance:
const fragmentShaderSource = `
precision mediump float;
uniform vec3 u_ambientLight;
uniform vec3 u_diffuseLight;
uniform vec3 u_specularLight;
uniform vec3 u_lightPosition;
uniform vec3 u_eyePosition;
uniform float u_shininess;
varying vec3 v_normal;
varying vec3 v_worldPosition;
void main() {
vec3 lightDirection;
vec3 viewDirection;
vec3 ambient, diffuse, specular;
// Ambient light
ambient = u_ambientLight;
// Diffuse light
lightDirection = normalize(u_lightPosition - v_worldPosition);
diffuse = u_diffuseLight * max(dot(v_normal, lightDirection), 0.0);
// Specular light
viewDirection = normalize(u_eyePosition - v_worldPosition);
vec3 halfVector = normalize(lightDirection + viewDirection);
specular = u_specularLight * pow(max(dot(v_normal, halfVector), 0.0), u_shininess);
vec3 litColor = ambient + diffuse + specular;
gl_FragColor = vec4(litColor, 1.0);
}
`;
Texture Mapping
Beyond colors, textures add detail to 3D objects. In the fragment shader, sample textures using texture coordinates:
uniform sampler2D u_texture;
varying vec2 v_texCoord;
void main() {
vec4 texColor = texture2D(u_texture, v_texCoord);
vec3 litColor = ... // Compute lighting
gl_FragColor = vec4(litColor * texColor.rgb, texColor.a);
}
In the main program, set texture coordinate attributes and pass the texture during drawing:
const texCoordAttributeLocation = gl.getAttribLocation(shaderProgram, 'a_texCoord');
gl.enableVertexAttribArray(texCoordAttributeLocation);
gl.bindBuffer(gl.ARRAY_BUFFER, texCoordBuffer);
gl.vertexAttribPointer(texCoordAttributeLocation, 2, gl.FLOAT, false, 0, 0);
gl.uniform1i(shaderProgram.uniforms.u_texture, 0);
gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, texture);
Blending Modes
WebGL supports various blending modes, configurable via gl.blendFuncSeparate() and gl.blendEquationSeparate(). For example, to achieve translucent blending:
gl.blendFuncSeparate(gl.SRC_ALPHA, gl.ONE_MINUS_SRC_ALPHA, gl.ONE, gl.ONE_MINUS_SRC_ALPHA);
gl.blendEquationSeparate(gl.FUNC_ADD, gl.FUNC_ADD);
Advanced Techniques
- Shadow Mapping: Use additional render passes and depth maps to simulate shadows cast by objects.
- Normal Mapping: Apply normal maps to simulate fine surface detail and bumps without extra geometry.
- Environment Mapping: Use environment maps to simulate reflections of the surrounding environment.
- GPU Particle Systems: Leverage GPU parallelism for particle effects like sparks, smoke, or water waves.
- Screen-Space Post-Processing: Apply effects like blur, color correction, or anti-aliasing to the entire screen after 3D rendering.



