Lesson 02: WebGL Basic 3D Physics Model Development

WebGL Textures and Materials

In WebGL, textures and materials are crucial for achieving realistic visual effects on 3D object surfaces. Textures supply per-pixel image data, while materials define how a surface responds to light and how that image data is combined into the final shaded color.

Understanding Textures

Textures are 2D images mapped onto 3D object surfaces to add detail and realism. In WebGL, the TEXTURE_2D target is the most common, holding RGB or RGBA images; specialized targets such as cube maps and depth textures are also supported.

Creating a Texture Object

const texture = gl.createTexture();

Loading a Texture Image

const image = new Image();
image.onload = () => {
  gl.bindTexture(gl.TEXTURE_2D, texture);
  gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, image);
  gl.generateMipmap(gl.TEXTURE_2D);
};
image.src = 'texture.jpg';

Setting Texture Parameters

Texture parameters control filtering (how texel colors are sampled and interpolated) and wrapping (how coordinates outside [0, 1] are handled):

gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.REPEAT); // Repeat horizontally
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.REPEAT); // Repeat vertically
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR_MIPMAP_LINEAR); // Linear filtering with mipmaps
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.LINEAR); // Linear filtering

Understanding Materials

Materials are collections of parameters describing surface properties, such as color, shininess, transparency, and roughness. In WebGL, materials are implemented through shader programs, using uniforms to control properties.

Common Material Properties

  • Color: The base color of the object, passed as a vec3 or vec4.
  • Shininess: Controls the intensity of specular highlights.
  • Ambient Light: Global illumination effect on the object.
  • Diffuse Light: Simulates direct light interaction with the surface.
  • Specular Light: Simulates reflective highlights.
  • Opacity: The object’s transparency, typically passed as a float.

Applying Materials

Vertex shaders typically don’t require material properties, but fragment shaders use them to compute the final color:

precision mediump float;

uniform vec3 u_color;
uniform float u_opacity;
uniform vec3 u_ambientLight;
uniform vec3 u_diffuseLight;
uniform vec3 u_specularLight;
uniform float u_shininess;
uniform vec3 u_eyePosition;

// ...other inputs...

void main() {
  vec3 ambient = u_ambientLight;
  vec3 diffuse = u_diffuseLight * max(dot(normal, lightDirection), 0.0);
  vec3 viewDirection = normalize(u_eyePosition - position); // position: world-space input from the vertex shader
  vec3 specular = u_specularLight * pow(max(dot(viewDirection, reflect(-lightDirection, normal)), 0.0), u_shininess);

  vec3 litColor = ambient + diffuse + specular;
  vec4 finalColor = vec4(litColor * u_color, u_opacity);

  gl_FragColor = finalColor;
}

Texture Mapping

To apply textures to 3D objects, set texture coordinates in the vertex shader and sample the texture in the fragment shader:

// Vertex Shader
attribute vec2 a_texCoord;
varying vec2 v_texCoord;

void main() {
  // ...other calculations...

  v_texCoord = a_texCoord;
}

// Fragment Shader
precision mediump float;

uniform sampler2D u_texture;
varying vec2 v_texCoord;

void main() {
  vec4 texColor = texture2D(u_texture, v_texCoord);
  vec3 litColor = ... // Compute lighting

  gl_FragColor = vec4(litColor * texColor.rgb, texColor.a);
}

In the main program, set texture coordinate attributes and texture units:

const texCoordAttributeLocation = gl.getAttribLocation(shaderProgram, 'a_texCoord');
gl.enableVertexAttribArray(texCoordAttributeLocation);
gl.bindBuffer(gl.ARRAY_BUFFER, texCoordBuffer);
gl.vertexAttribPointer(texCoordAttributeLocation, 2, gl.FLOAT, false, 0, 0);

gl.activeTexture(gl.TEXTURE0);
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.uniform1i(shaderProgram.uniforms.u_texture, 0);

Advanced Texture Techniques

  • Cubemaps: Simulate environmental reflections.
  • Normal Maps: Simulate surface bumps and roughness.
  • Environment Masking: Control lighting edge effects.
  • Displacement Maps: Alter surface height for detailed effects.
  • Color Space Conversion: Convert from sRGB to linear space for accurate color rendering (a sketch follows this list).
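
As a minimal sketch of that last conversion, a fragment shader can decode an sRGB texel with the common gamma-2.2 approximation, light in linear space, and re-encode for display:

precision mediump float;
uniform sampler2D u_texture;
varying vec2 v_texCoord;

void main() {
  vec4 texColor = texture2D(u_texture, v_texCoord);
  vec3 linearColor = pow(texColor.rgb, vec3(2.2));      // sRGB -> linear (approximation)
  vec3 lit = linearColor;                               // ...lighting would run here, in linear space...
  gl_FragColor = vec4(pow(lit, vec3(1.0 / 2.2)), texColor.a); // linear -> sRGB for display
}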

Implementing a Material System

To streamline material management, create a Material class that encapsulates properties and methods for setting and applying materials:

class Material {
  constructor(color, shininess) {
    this.color = color;
    this.shininess = shininess;
    // ...other material properties...
  }

  applyMaterial(shaderProgram) {
    gl.uniform3fv(shaderProgram.uniforms.u_color, this.color);
    gl.uniform1f(shaderProgram.uniforms.u_shininess, this.shininess);
    // ...apply other material properties...
  }
}

const myMaterial = new Material([1, 0, 0], 100);

Apply the material before drawing:

myMaterial.applyMaterial(shaderProgram);
drawObject();

Texture Coordinates and Generation

Texture coordinates are key to mapping textures onto 3D surfaces. Each vertex typically carries texture coordinates in the range [0, 1], where (0, 0) is the bottom-left corner and (1, 1) the top-right corner of the texture.

Manual Texture Coordinates

Specify texture coordinates directly in vertex data if you know how vertices map to the texture:

const vertices = [
  // x, y position followed by u, v texture coordinates
  -1, -1, 0, 0,
   1, -1, 1, 0,
   1,  1, 1, 1,
  -1,  1, 0, 1
];

// ...create vertex and texture coordinate buffers...

Automatic Texture Coordinate Generation

To stretch or repeat textures in specific ways, generate coordinates programmatically:

function generateTextureCoordinates(numVertices) {
  const textureCoordinates = new Float32Array(numVertices * 2);

  for (let i = 0; i < numVertices; i++) {
    const u = i / (numVertices - 1);
    const v = i % 2 === 0 ? 0 : 1;
    textureCoordinates[i * 2] = u;
    textureCoordinates[i * 2 + 1] = v;
  }

  return textureCoordinates;
}

Texture Mapping Modes

WebGL offers texture mapping modes, controlled by TEXTURE_WRAP_S and TEXTURE_WRAP_T parameters for horizontal and vertical directions:

  • CLAMP_TO_EDGE: Clamps texture coordinates to 0 or 1, preventing repetition.
  • REPEAT: Repeats the texture horizontally and vertically.
  • MIRRORED_REPEAT: Repeats the texture with mirroring at each repetition.
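
One WebGL1 caveat worth noting: non-power-of-two textures only support CLAMP_TO_EDGE wrapping and non-mipmapped filters; violating this makes the texture sample as black. A safe setup for such textures:

gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.LINEAR); // no mipmaps for NPOT in WebGL1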

Texture Filtering

Texture filtering determines how texel colors are sampled when a texture is drawn larger or smaller than its native resolution. It is controlled by TEXTURE_MIN_FILTER (minification) and TEXTURE_MAG_FILTER (magnification):

  • NEAREST: Uses the nearest pixel color, potentially causing pixelation.
  • LINEAR: Linear interpolation for smoother results, though it may blur details.
  • NEAREST_MIPMAP_NEAREST and LINEAR_MIPMAP_NEAREST: Sample from the single nearest mipmap level, with nearest or linear filtering inside that level.
  • NEAREST_MIPMAP_LINEAR and LINEAR_MIPMAP_LINEAR: Linearly interpolate between the two nearest mipmap levels.

Texture Compression

Texture compression reduces memory usage and bandwidth, especially on mobile devices. WebGL supports formats like S3TC (DXTn), ETC1, and PVRTC. Check for browser/device support and use corresponding extensions:

// Compressed formats are exposed through extensions; always check for support first
const ext = gl.getExtension('WEBGL_compressed_texture_s3tc');
if (ext) {
  gl.compressedTexImage2D(gl.TEXTURE_2D, 0, ext.COMPRESSED_RGB_S3TC_DXT1_EXT, width, height, 0, data);
}

Texture Units and Texture Arrays

WebGL supports multiple texture units for applying multiple textures simultaneously. Use gl.activeTexture() to switch units and gl.uniform1i() to set shader texture unit locations:

gl.activeTexture(gl.TEXTURE1);
gl.bindTexture(gl.TEXTURE_2D, texture2);
gl.uniform1i(shader.uniforms.u_texture2, 1);

Texture arrays (a WebGL2 feature) store multiple 2D textures in a single object, accessed via an additional texture coordinate dimension.
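
In WebGL2 (GLSL ES 3.00) the layer index is that extra dimension; a minimal fragment-shader sketch (the names are illustrative):

#version 300 es
precision mediump float;

uniform mediump sampler2DArray u_textures; // all layers live in one texture object
in vec2 v_texCoord;
in float v_layer; // which layer this fragment samples
out vec4 outColor;

void main() {
  outColor = texture(u_textures, vec3(v_texCoord, v_layer));
}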

Dynamic Textures and Video Textures

WebGL allows dynamic texture updates, useful for real-time rendering or video textures:

const texture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, texture);
gl.texImage2D(gl.TEXTURE_2D, 0, gl.RGBA, gl.RGBA, gl.UNSIGNED_BYTE, videoElement);

Update the texture during video playback:

function updateTexture() {
  if (videoElement.readyState >= videoElement.HAVE_CURRENT_DATA) {
    gl.bindTexture(gl.TEXTURE_2D, texture);
    gl.texSubImage2D(gl.TEXTURE_2D, 0, 0, 0, gl.RGBA, gl.UNSIGNED_BYTE, videoElement);
  }
  requestAnimationFrame(updateTexture);
}
updateTexture();

Texture Mapping Deformations

Beyond simple planar mapping, WebGL supports advanced texture mapping techniques for complex surface effects:

  • Offset and Scale: Adjust texture position and size via texture coordinates:
vec2 offset = vec2(0.1, 0.2);
vec2 scale = vec2(1.5, 0.8);
v_texCoord = (a_texCoord - offset) * scale;
  • Tiling and Wrapping: Create distortion effects using texture coordinates and mathematical functions (e.g., sin, cos):
v_texCoord = vec2(
  a_texCoord.x + sin(a_texCoord.y * 10.0) * 0.1,
  a_texCoord.y + cos(a_texCoord.x * 10.0) * 0.1
);
  • Parallax Mapping: Simulate surface depth based on normals and viewing angle.
  • Cube Map Environment Mapping: Use a six-face cubemap to simulate environmental reflections.

Extending Material Properties

Beyond basic color, shininess, and opacity, materials can include additional properties for complex surface effects:

  • Metallic: Indicates whether the surface reflects light like metal.
  • Roughness: Affects the spread of specular highlights.
  • Ambient Occlusion (AO): Simulates shadows on surfaces for added depth.
  • Normal: Used for normal mapping to alter lighting effects.
  • Environment Masking: Controls light attenuation at edges.

In shaders, compute the final color based on these properties:

// ...other inputs...

vec3 baseColor = u_baseColor.rgb;
float metallic = u_metallic;
float roughness = u_roughness;
float ao = u_ao;
vec3 normal = normalize(u_normal);
vec3 reflectedColor = ... // Compute reflection using cubemap

vec3 ambient = u_ambientLight * baseColor * ao;
vec3 diffuse = u_diffuseLight * max(dot(normal, lightDirection), 0.0);
vec3 specular = ... // Compute based on metallic, normal, and viewDirection

vec3 litColor = ambient + diffuse + specular + reflectedColor;
vec4 finalColor = vec4(litColor, u_opacity);

gl_FragColor = finalColor;

Material Library

Organize and manage materials with a material library to store preset materials for reuse in scenes:

class MaterialLibrary {
  constructor() {
    this.materials = {};
  }

  add(name, material) {
    this.materials[name] = material;
  }

  get(name) {
    return this.materials[name];
  }
}

const materialLibrary = new MaterialLibrary();
materialLibrary.add('red', new Material([1, 0, 0], 100));

Apply materials from the library during rendering:

const myMaterial = materialLibrary.get('red');
myMaterial.applyMaterial(shaderProgram);
drawObject();

Texture Blending

Blend multiple textures in the fragment shader for complex surface effects, such as layering textures or creating gradient transitions:

precision mediump float;

uniform sampler2D u_texture1;
uniform sampler2D u_texture2;
uniform float u_mixFactor; // Blend weight between the two textures
varying vec2 v_texCoord;

void main() {
  vec4 texColor1 = texture2D(u_texture1, v_texCoord);
  vec4 texColor2 = texture2D(u_texture2, v_texCoord);
  vec4 finalColor = mix(texColor1, texColor2, u_mixFactor);

  gl_FragColor = finalColor;
}

Performance Optimization

  • Texture Compression: Reduce memory usage and load times.
  • Texture Atlasing: Combine multiple small textures into one to minimize texture switches.
  • Mipmapping: Improve texture sampling speed, especially for distant objects.
  • Texture Filtering: Balance quality and performance with appropriate filtering strategies.
  • Texture Unit Reuse: Minimize texture unit switches.

Summary

Textures and materials are key to achieving realism in WebGL 3D graphics. Understanding their mechanics and application enables the creation of rich 3D scenes. Advanced techniques like normal mapping and environment masking further enhance visual quality. Continuous learning and experimentation will deepen your mastery of 3D graphics programming.

WebGL Complex Geometries and Model Loading

In WebGL, creating complex geometries typically involves handling numerous vertices and faces, while model loading entails importing external file formats such as OBJ, GLTF, or FBX. This document provides an in-depth exploration of methods for creating complex geometries and loading and rendering 3D models in WebGL.

Creating Complex Geometries

Building from Basic Shapes

WebGL’s basic shapes include cubes, spheres, cylinders, and more. These are often generated using parameterized mathematical functions and combined to form complex geometries. Below is an example of creating a cube:

function createBox(width, height, depth) {
  const vertices = [
    // Front face
    -width, -height, depth,
    width, -height, depth,
    width, height, depth,
    -width, height, depth,

    // Back face
    -width, -height, -depth,
    -width, height, -depth,
    width, height, -depth,
    width, -height, -depth,

    // ...other faces...
  ];

  // ...create indices and vertex buffers...
  return { vertices, indices };
}

Combining Geometries

Complex geometries can be created by combining multiple basic shapes. For example, to create a cube with rounded corners:

function createRoundedBox(width, height, depth, radius) {
  const box = createBox(width, height, depth);
  const sphere = createSphere(radius, 32, 32); // Use sphere for rounded corners

  // ...intersect sphere with cube faces to get rounded cube vertices...
  // ...update vertex buffers...
}

Generating Geometries from Mathematical Functions

More complex geometries, such as cones, frustums, or irregular shapes, can be generated directly from mathematical functions. For example, creating a frustum:

function createCylinder(radius1, radius2, height, slices, stacks) {
  const vertices = [];
  const indices = [];

  // ...generate vertices and indices...
  // ...create vertex buffers...
  return { vertices, indices };
}
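
As a hedged sketch of the elided generation step, the side surface of such a frustum can be built ring by ring (caps omitted; radius1 is the bottom ring and radius2 the top):

function createFrustumSide(radius1, radius2, height, slices) {
  const vertices = [];
  const indices = [];

  // Two rings of vertices: bottom (y = 0) and top (y = height)
  for (let i = 0; i <= slices; i++) {
    const theta = (i / slices) * Math.PI * 2;
    const cos = Math.cos(theta);
    const sin = Math.sin(theta);
    vertices.push(radius1 * cos, 0, radius1 * sin);      // bottom ring vertex 2i
    vertices.push(radius2 * cos, height, radius2 * sin); // top ring vertex 2i + 1
  }

  // Two triangles per quad between adjacent ring segments
  for (let i = 0; i < slices; i++) {
    const a = i * 2, b = a + 1, c = a + 2, d = a + 3;
    indices.push(a, b, c);
    indices.push(b, d, c);
  }

  return { vertices, indices };
}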

3D Model Loading

3D Model File Formats

Common 3D model formats include OBJ, GLTF, and FBX. WebGL does not natively support these formats, so third-party libraries are used for loading and parsing.

OBJ Loading

OBJ is a text-based format describing a 3D model’s vertices, faces, and texture coordinates. It can be loaded easily with the three.js library:

import { OBJLoader } from 'three/examples/jsm/loaders/OBJLoader';

const loader = new OBJLoader();
loader.load('path/to/model.obj', (object) => {
  scene.add(object);
});

GLTF Loading

GLTF is a modern 3D model format that includes geometry, materials, textures, and animations. It is supported by three.js:

import { GLTFLoader } from 'three/examples/jsm/loaders/GLTFLoader';

const loader = new GLTFLoader();
loader.load('path/to/model.gltf', (gltf) => {
  scene.add(gltf.scene);
});

FBX Loading

FBX, an Autodesk format, is widely used in games and film. It can be loaded using three.js's FBXLoader:

import { FBXLoader } from 'three/examples/jsm/loaders/FBXLoader';

const loader = new FBXLoader();
loader.load('path/to/model.fbx', (object) => {
  scene.add(object);
});

Rendering Complex Geometries and Models

The rendering process for both geometries and models is similar:

  1. Bind vertex buffer objects.
  2. Bind index buffer objects.
  3. Set model, view, and projection matrices.
  4. Call drawElements or drawArrays to render the geometry.

For 3D models, data is typically converted into WebGL-compatible formats:

function renderModel(model) {
  gl.bindBuffer(gl.ARRAY_BUFFER, model.verticesBuffer);
  gl.enableVertexAttribArray(vertexAttributeLocation);
  gl.vertexAttribPointer(vertexAttributeLocation, 3, gl.FLOAT, false, 0, 0);

  gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, model.indicesBuffer);
  gl.drawElements(gl.TRIANGLES, model.numIndices, gl.UNSIGNED_SHORT, 0);
}

Meshes and Mesh Shaders

In WebGL, complex geometries are represented as meshes, consisting of vertex arrays and index arrays. The shaders that process these meshes are divided, as usual, into vertex and fragment stages.

Mesh Structure

A mesh comprises vertices and indices, with indices defining how vertices connect to form faces. In JavaScript, a mesh can be structured as follows:

class Mesh {
  constructor(vertices, indices) {
    this.vertices = vertices;
    this.indices = indices;
    // ...create vertex and index buffers...
  }

  draw(shaderProgram) {
    gl.bindBuffer(gl.ARRAY_BUFFER, this.verticesBuffer);
    gl.vertexAttribPointer(shaderProgram.attribs.position, 3, gl.FLOAT, false, 0, 0);

    gl.bindBuffer(gl.ELEMENT_ARRAY_BUFFER, this.indicesBuffer);
    gl.drawElements(gl.TRIANGLES, this.indices.length, gl.UNSIGNED_SHORT, 0);
  }
}

Mesh Shaders

Vertex shaders process each vertex, computing its screen position, normals, texture coordinates, etc. Fragment shaders process each pixel, calculating its final color.

// Vertex Shader
attribute vec3 a_position;
attribute vec3 a_normal;
attribute vec2 a_texCoord;

uniform mat4 u_modelMatrix;
uniform mat4 u_viewMatrix;
uniform mat4 u_projectionMatrix;

varying vec3 v_normal;
varying vec2 v_texCoord;
varying vec3 v_position;

void main() {
  gl_Position = u_projectionMatrix * u_viewMatrix * u_modelMatrix * vec4(a_position, 1.0);
  v_normal = normalize(mat3(u_modelMatrix) * a_normal);
  v_texCoord = a_texCoord;
  v_position = (u_modelMatrix * vec4(a_position, 1.0)).xyz; // world-space position for lighting
}

// Fragment Shader
precision mediump float;

uniform vec3 u_ambientLight;
uniform vec3 u_lightDirection; // direction from the surface toward the light
uniform vec3 u_diffuseLight;
uniform vec3 u_specularLight;
uniform float u_shininess;
uniform sampler2D u_texture;

varying vec3 v_normal;
varying vec2 v_texCoord;
varying vec3 v_position;

void main() {
  vec3 normal = normalize(v_normal);
  vec3 lightDirection = normalize(u_lightDirection);
  vec3 viewDirection = normalize(-v_position); // assumes the camera at the world origin

  vec3 ambient = u_ambientLight;
  vec3 diffuse = u_diffuseLight * max(dot(normal, lightDirection), 0.0);
  vec3 specular = u_specularLight * pow(max(dot(viewDirection, reflect(-lightDirection, normal)), 0.0), u_shininess);

  vec4 texColor = texture2D(u_texture, v_texCoord);
  vec3 litColor = ambient + diffuse + specular;

  gl_FragColor = vec4(litColor * texColor.rgb, texColor.a);
}

Animation and Transformations

In 3D scenes, objects often move, rotate, or scale over time or in response to user input. This is achieved by modifying the model matrix.

function animate(time) {
  time *= 0.001; // Convert to seconds

  // Update model matrix
  const rotation = quat.fromEuler(quat.create(), time * 30, time * 50, time * 20);
  const translation = vec3.fromValues(0, 0, -5);
  const scaling = vec3.fromValues(1, 1, 1);

  const modelMatrix = mat4.create();
  mat4.translate(modelMatrix, modelMatrix, translation);
  mat4.multiply(modelMatrix, modelMatrix, mat4.fromQuat(mat4.create(), rotation)); // quaternion -> matrix, then compose
  mat4.scale(modelMatrix, modelMatrix, scaling);

  // ...update model matrices for other objects...

  gl.viewport(0, 0, canvas.width, canvas.height);
  gl.clearColor(0.2, 0.3, 0.3, 1.0);
  gl.clear(gl.COLOR_BUFFER_BIT | gl.DEPTH_BUFFER_BIT);

  shaderProgram.uniforms.u_modelMatrix.value = modelMatrix;
  // ...set other uniforms...

  mesh.draw(shaderProgram);

  requestAnimationFrame(animate);
}

Lights and Shadows

WebGL allows different lighting types to enhance 3D scenes, including point lights, directional lights, and spotlights. Shadows can be implemented using shadow maps.

Lights

Add light uniforms to shaders and compute lighting in the fragment shader:

// ...Vertex Shader...

// Fragment Shader
uniform vec3 u_ambientLight;
uniform vec3 u_directionalLightDirection;
uniform vec3 u_directionalLightIntensity;
uniform vec3 u_specularLight;
uniform float u_shininess;

// ...other inputs...

void main() {
  vec3 normal = normalize(v_normal);
  vec3 lightDirection = normalize(u_directionalLightDirection);
  vec3 viewDirection = normalize(-v_position); // assumes the camera at the world origin

  vec3 ambient = u_ambientLight;
  vec3 diffuse = u_directionalLightIntensity * max(dot(normal, lightDirection), 0.0);
  vec3 specular = u_specularLight * pow(max(dot(viewDirection, reflect(-lightDirection, normal)), 0.0), u_shininess);

  vec4 texColor = texture2D(u_texture, v_texCoord);
  vec3 litColor = ambient + diffuse + specular;

  gl_FragColor = vec4(litColor * texColor.rgb, texColor.a);
}

Shadow Maps

Shadow maps require an additional render target to store the scene’s depth from the light’s perspective. In the fragment shader, compare the pixel’s depth with the shadow map to determine occlusion:

// ...Fragment Shader...

uniform sampler2D u_shadowMap;
uniform vec2 u_shadowMapSize;
uniform mat4 u_lightSpaceMatrix;

void main() {
  // ...compute lighting...

  // Check shadow
  vec4 shadowCoord = u_lightSpaceMatrix * vec4(position, 1.0);
  vec3 projCoord = shadowCoord.xyz / shadowCoord.w * 0.5 + 0.5; // NDC -> [0, 1] texture space
  float closestDepth = texture2D(u_shadowMap, projCoord.xy).r;
  float shadow = closestDepth < projCoord.z ? 0.0 : 1.0; // 0.0 = occluded

  // ...apply shadow to lighting calculations...

  gl_FragColor = vec4(litColor * shadow * texColor.rgb, texColor.a);
}

Multiple Light Sources and Lighting Effects

In real-world scenarios, objects are illuminated by multiple light sources. WebGL supports this by adding more light uniforms and combining their effects in the fragment shader.

Adding Multiple Light Sources

Define uniforms for each light source in the shader:

uniform vec3 u_ambientLight0;
uniform vec3 u_directionalLight0Direction;
uniform vec3 u_directionalLight0Intensity;
uniform vec3 u_specularLight0;
uniform float u_shininess0;

uniform vec3 u_ambientLight1;
uniform vec3 u_directionalLight1Direction;
uniform vec3 u_directionalLight1Intensity;
uniform vec3 u_specularLight1;
uniform float u_shininess1;

// ...more light sources...

Computing Multiple Light Effects

Accumulate each light source's contribution in the fragment shader:

void main() {
  vec3 normal = normalize(v_normal);
  vec3 viewDirection = normalize(-v_position); // assumes the camera at the world origin

  vec3 ambient = vec3(0.0);
  vec3 diffuse = vec3(0.0);
  vec3 specular = vec3(0.0);

  // Light Source 0
  vec3 lightDirection0 = normalize(u_directionalLight0Direction);
  ambient += u_ambientLight0;
  diffuse += u_directionalLight0Intensity * max(dot(normal, lightDirection0), 0.0);
  specular += u_specularLight0 * pow(max(dot(viewDirection, reflect(-lightDirection0, normal)), 0.0), u_shininess0);

  // Light Source 1
  vec3 lightDirection1 = normalize(u_directionalLight1Direction);
  ambient += u_ambientLight1;
  diffuse += u_directionalLight1Intensity * max(dot(normal, lightDirection1), 0.0);
  specular += u_specularLight1 * pow(max(dot(viewDirection, reflect(-lightDirection1, normal)), 0.0), u_shininess1);

  // ...more light sources...

  vec4 texColor = texture2D(u_texture, v_texCoord);
  vec3 litColor = ambient + diffuse + specular;

  gl_FragColor = vec4(litColor * texColor.rgb, texColor.a);
}

Enhancing Lighting Effects

Beyond basic diffuse and specular highlights, effects like soft-edge highlights, ambient occlusion, and global illumination can enhance realism.
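
For instance, a soft-edge (rim) highlight can be folded into the lighting sum with a few lines; normal, viewDirection, and u_specularLight are assumed to come from the surrounding shader:

// Rim term: strongest where the surface silhouettes away from the camera
float rim = pow(1.0 - max(dot(normal, viewDirection), 0.0), 3.0);
litColor += u_specularLight * rim;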

Dynamic Lighting and Shadows

Dynamic lighting and shadows update in real-time as objects and light sources move. In WebGL, this requires recalculating lighting and shadow maps each frame.

Dynamic Shadows

Dynamic shadows involve updating the shadow map every frame, adjusting for changes in light source position and direction:

// ...update light position and direction...

// Recalculate shadow map
renderSceneToShadowMap();

Lighting Animation

Lighting animations can be achieved by altering light position, direction, or intensity. For example, making a light orbit an object:

// ...compute new light position and direction...
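
A hedged sketch of that computation, orbiting a point light around the origin (u_lightPosition is an assumed uniform):

const t = performance.now() * 0.001; // seconds
const orbitRadius = 5.0;
const lightPosition = [Math.cos(t) * orbitRadius, 2.0, Math.sin(t) * orbitRadius];
gl.uniform3fv(shaderProgram.uniforms.u_lightPosition, lightPosition);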

Dynamic Lighting Effects

Advanced effects like dynamic reflections and refractions require complex shader algorithms, such as screen-space reflections (SSR) to simulate surface reflections.

Performance Considerations

Optimizing performance is critical when handling complex 3D scenes and effects. Key strategies include:

  • Reduce Vertex and Face Count: Simplify models or use Level of Detail (LOD) to lower geometry complexity.
  • Batch Rendering: Group geometries with the same material to reduce draw calls.
  • Instancing: Use instancing for multiple identical geometries to save GPU memory.
  • Shadow Map Resolution: Adjust shadow map resolution based on scene size and complexity.
  • Texture Filtering and Mapping: Avoid unnecessary texture sampling overhead.
  • Shader Optimization: Minimize shader computations and avoid unnecessary branches or loops.

WebGL Lighting and Materials

Implementing lighting and materials in WebGL involves complex shader programming, particularly using vertex shaders and fragment shaders to simulate lighting effects and material properties.

Basic Applications

Basic Concept of Diffuse Lighting

Diffuse lighting occurs when light hits a surface and scatters uniformly in all directions. Calculating diffuse lighting involves considering the light source’s color, the material’s color, and the angle between the light direction and the surface normal.

Simple Application Example

Vertex Shader

attribute vec3 a_Position; // Vertex position
attribute vec3 a_Normal; // Vertex normal
uniform mat4 u_ModelMatrix; // Model transformation matrix
uniform mat4 u_ViewMatrix; // View transformation matrix
uniform mat4 u_ProjectionMatrix; // Projection transformation matrix
uniform vec3 u_LightPos; // Light source position
varying vec3 v_Normal; // Normal passed to fragment shader
varying vec3 v_Position; // Position passed to fragment shader

void main() {
    gl_Position = u_ProjectionMatrix * u_ViewMatrix * u_ModelMatrix * vec4(a_Position, 1.0);
    v_Normal = mat3(u_ModelMatrix) * a_Normal; // Transform normal to world space
    v_Position = vec3(u_ModelMatrix * vec4(a_Position, 1.0)); // Transform position to world space
}

Fragment Shader

precision mediump float;

uniform vec3 u_LightColor; // Light source color
uniform vec3 u_AmbientLight; // Ambient light color
uniform vec3 u_MaterialDiffuse; // Material diffuse color
varying vec3 v_Normal; // Normal from vertex shader
varying vec3 v_Position; // Position from vertex shader

void main() {
    // Calculate light direction vector (from vertex to light source)
    vec3 lightDir = normalize(u_LightPos - v_Position);

    // Calculate diffuse lighting intensity (dot product of normal and light direction)
    float diffuseFactor = max(dot(normalize(v_Normal), lightDir), 0.0);

    // Final color = ambient light + diffuse light
    vec3 finalColor = u_AmbientLight * u_MaterialDiffuse + u_LightColor * u_MaterialDiffuse * diffuseFactor;

    gl_FragColor = vec4(finalColor, 1.0); // Output final color
}

Explanation

  • In the vertex shader, the vertex position and normal are transformed to world space for subsequent lighting calculations.
  • In the fragment shader, the diffuse intensity is the dot product of the light direction and the (re-normalized) surface normal. The dot product ranges over [-1, 1]; max() clamps it to zero so only surfaces facing the light receive a diffuse contribution.
  • u_LightColor, u_AmbientLight, and u_MaterialDiffuse are uniform variables representing the light source color, ambient light color, and material diffuse color, respectively. These are set in JavaScript and passed to the shader.
  • gl_FragColor outputs the computed color to the screen.

Extended Applications

Multiple Light Sources

The example above handles a single light source. Real-world scenes often involve multiple lights, each with its own position, color, and intensity. To support multiple lights, add corresponding calculations and variables in the shaders. For example, define a light source struct and pass multiple instances from JavaScript:

struct LightSource {
    vec3 position;
    vec3 color;
};

uniform LightSource u_Lights[3]; // Up to three light sources
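
The fragment shader can then accumulate each light's diffuse contribution in a loop (a sketch; v_Normal and v_Position come from the vertex shader as before):

vec3 diffuse = vec3(0.0);
for (int i = 0; i < 3; i++) {
    vec3 lightDir = normalize(u_Lights[i].position - v_Position);
    diffuse += u_Lights[i].color * max(dot(normalize(v_Normal), lightDir), 0.0);
}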

Specular Reflection

Specular reflection simulates highlights, typically using the Phong lighting model. Compute the dot product of the view direction and the reflection direction, raised to a power to simulate shininess:

vec3 viewDir = normalize(-v_Position);
vec3 reflectDir = reflect(-lightDir, v_Normal);
float specularFactor = pow(max(dot(viewDir, reflectDir), 0.0), shininess);

Texture Mapping

Material colors can come from textures, sampled using the texture2D() function. Pass texture coordinates from the vertex shader to the fragment shader:

// Vertex Shader
attribute vec2 a_TexCoord; // Texture coordinates
varying vec2 v_TexCoord; // Passed to fragment shader

void main() {
    ...
    v_TexCoord = a_TexCoord;
    ...
}

// Fragment Shader
uniform sampler2D u_Texture; // Texture sampler

void main() {
    ...
    vec3 diffuseColor = texture2D(u_Texture, v_TexCoord).rgb * u_MaterialDiffuse;
    ...
}

Environment Mapping

Environment mapping simulates reflections of the surrounding environment, typically using cube maps (spherical-harmonic approximations are another option):

// Environment mapping in fragment shader
uniform samplerCube u_EnvMap; // Cube map sampler
...
vec3 envReflection = textureCube(u_EnvMap, reflect(normalize(v_Position), v_Normal)).rgb;

Cube Maps

Cube maps consist of six square images corresponding to the six axis directions, used to simulate the environment:

// Load cube map in JavaScript
var cubeTexture = gl.createTexture();
gl.bindTexture(gl.TEXTURE_CUBE_MAP, cubeTexture);

for (var i = 0; i < 6; i++) {
    var face = gl.TEXTURE_CUBE_MAP_POSITIVE_X + i;
    gl.texImage2D(face, 0, gl.RGB, width, height, 0, gl.RGB, gl.UNSIGNED_BYTE, faceData[i]);
    // faceData[i] is image data for each face
}

gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MIN_FILTER, gl.LINEAR);
gl.texParameteri(gl.TEXTURE_CUBE_MAP, gl.TEXTURE_MAG_FILTER, gl.LINEAR);

Shadow Mapping

Shadow maps simulate shadows cast by objects under light sources, requiring additional textures and shader calculations to determine if a pixel is in shadow:

// u_ShadowMap is the shadow map sampler; v_Position is the fragment's world-space position
uniform sampler2D u_ShadowMap;
...
vec4 shadowCoord = u_LightSpaceMatrix * vec4(v_Position, 1.0);
vec3 projCoord = shadowCoord.xyz / shadowCoord.w * 0.5 + 0.5; // NDC -> [0, 1]
if (projCoord.z > texture2D(u_ShadowMap, projCoord.xy).r) {
    // Pixel is in shadow, reduce brightness
    finalColor *= 0.5; // Example, actual shadow effects are more complex
}

Blended Materials

Some materials combine multiple lighting models, such as metal and plastic. Implement different lighting calculations for each material type:

// Assuming two material colors and lighting calculations
vec3 metalColor = calculateMetallicMaterial(u_LightPos, v_Normal, ...);
vec3 plasticColor = calculatePlasticMaterial(u_LightPos, v_Normal, ...);

// Blend materials based on texture or other factors
vec3 finalColor = mix(metalColor, plasticColor, metallicFactor);

Light Maps

Light maps store precomputed lighting information in textures, reducing runtime calculations. During baking, lighting is simulated and stored per pixel, then applied at runtime:

// Light map sampling in fragment shader
uniform sampler2D u_LightMap;
...
vec3 bakedLight = texture2D(u_LightMap, v_TexCoord).rgb;
finalColor += bakedLight;

Normal Mapping

Normal maps simulate fine surface details without extra geometry, storing per-pixel normal directions. The fragment shader adjusts lighting calculations based on these normals:

uniform sampler2D u_NormalMap; // Normal map sampler

void main() {
    ...
    vec3 normal = normalize(texture2D(u_NormalMap, v_TexCoord).rgb * 2.0 - 1.0);
    vec3 transformedNormal = normalize(mat3(u_ModelMatrix) * normal);
    ...
}

Real-Time Shadows

Real-time shadows, achieved via shadow maps or volumetric shadows, allow dynamic shadow casting as objects move or rotate:

// Fragment shader (WebGL2 / GLSL ES 3.00; comparison samplers are not available in WebGL1)
uniform mediump sampler2DShadow u_ShadowMap; // Depth-comparison shadow sampler
...
vec4 shadowCoord = u_LightSpaceMatrix * vec4(v_Position, 1.0);
float lit = textureProj(u_ShadowMap, shadowCoord); // 1.0 = lit, 0.0 = occluded
// Attenuate lighting where the fragment is occluded
finalColor *= mix(shadowFactor, 1.0, lit);

Physically Based Rendering (PBR)

PBR aims for realism using physical laws, incorporating energy-conserving specular reflection, Fresnel effects, and the distinction between metallic and non-metallic surfaces. It relies on microfacet shading models such as Cook-Torrance (classic models like Blinn-Phong are cheaper approximations):

// PBR inputs
uniform vec3 u_DiffuseColor;
uniform float u_Metallic;
uniform float u_Roughness;
uniform float u_AmbientOcclusion;
uniform sampler2D u_MetallicRoughnessMap;
uniform sampler2D u_NormalMap;
...

// Compute PBR lighting (texture() is the GLSL ES 3.00 / WebGL2 sampling call)
vec4 mr = texture(u_MetallicRoughnessMap, v_TexCoord);
float metallic = u_Metallic * mr.b; // glTF convention: metallic in the blue channel
float roughness = u_Roughness * mr.g; // roughness in the green channel
vec3 F0 = mix(vec3(0.04), u_DiffuseColor, metallic); // base reflectance
vec3 N = normalize(texture(u_NormalMap, v_TexCoord).rgb * 2.0 - 1.0);

// Other lighting calculations...

GPU Particle Systems

GPU-based particle systems handle large numbers of particles for effects like fire, smoke, or water. They have dedicated rendering pipelines for generation, updating, collision detection, and destruction:

// Per-particle state stored in a texture (WebGL2 / GLSL ES 3.00, which provides texelFetch)
uniform sampler2D u_ParticleTex; // Each texel stores one particle (rgb = color, a = size)
uniform ivec2 u_ParticleGrid; // Dimensions of the particle texture
...

// One fragment per particle: fetch its state by integer texel coordinate
ivec2 texel = ivec2(gl_FragCoord.xy);
vec4 particleData = texelFetch(u_ParticleTex, texel, 0);

// Unpack particle color and size
vec3 particleColor = particleData.rgb;
float particleSize = particleData.a;

// Compute final particle color and position
...

Screen-Space Post-Processing Effects

Effects like anti-aliasing, depth of field, blur, tone mapping, and color correction are applied in screen space after geometry rendering:

// Fragment shader
uniform sampler2D u_ScreenTexture; // Rendered screen texture
uniform vec2 u_TexelSize; // 1.0 / screen resolution
varying vec2 v_TexCoord;
const float edgeThreshold = 0.1;

void main() {
vec4 fragColor = texture2D(u_ScreenTexture, v_TexCoord);
vec4 leftColor = texture2D(u_ScreenTexture, v_TexCoord - vec2(u_TexelSize.x, 0.0));
vec4 rightColor = texture2D(u_ScreenTexture, v_TexCoord + vec2(u_TexelSize.x, 0.0));
vec4 upColor = texture2D(u_ScreenTexture, v_TexCoord + vec2(0.0, u_TexelSize.y));
vec4 downColor = texture2D(u_ScreenTexture, v_TexCoord - vec2(0.0, u_TexelSize.y));

float edge = abs(fragColor.r - leftColor.r) + abs(fragColor.g - leftColor.g) + abs(fragColor.b - leftColor.b) +
             abs(fragColor.r - rightColor.r) + abs(fragColor.g - rightColor.g) + abs(fragColor.b - rightColor.b) +
             abs(fragColor.r - upColor.r) + abs(fragColor.g - upColor.g) + abs(fragColor.b - upColor.b) +
             abs(fragColor.r - downColor.r) + abs(fragColor.g - downColor.g) + abs(fragColor.b - downColor.b);

edge = smoothstep(edgeThreshold, edgeThreshold + 0.01, edge);
fragColor.rgb = edge * fragColor.rgb; // Retain only edge colors

gl_FragColor = fragColor;
}

Performance Monitoring and Debugging

Use tools like WebGL Inspector or Profiler to monitor GPU performance and identify bottlenecks. In browsers, the Performance panel in developer tools can track WebGL performance, such as rendering time and GPU memory usage. In code, use console.time and console.timeEnd to measure specific operations:

console.time('render');
drawScene();
console.timeEnd('render');

WebGL2 Extensions

WebGL2 introduces features like floating-point color buffers, multisample renderbuffers, texture arrays, 3D textures, and built-in depth textures, enabling more complex lighting and material effects. Check for WebGL2 support and enable optional extensions in JavaScript:

var gl = canvas.getContext('webgl2', {antialias: false}); // Get WebGL2 context
if (!gl) {
  alert('WebGL2 not supported');
}

// Check and enable EXT_color_buffer_float extension
var ext = gl.getExtension('EXT_color_buffer_float');
if (!ext) {
  alert('EXT_color_buffer_float not supported');
} else {
  // Use extension
}

WebGL Shadows and Post-Processing

Basic Post-Processing

Shadow Mapping

Shadow mapping is a common technique for real-time shadows. It renders the scene’s depth information from the light’s perspective and uses this data during the main render to determine if pixels are in shadow. The simplified process is:

  1. Generate a shadow map (depth texture) from the light’s perspective.
  2. In the main render, use the shadow map to check if pixels are in shadow.

Generating a Shadow Map

// Create shadow map texture
var shadowMap = gl.createTexture();
gl.bindTexture(gl.TEXTURE_2D, shadowMap);
// Set texture parameters
// ...

// Create framebuffer object
var shadowFBO = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, shadowFBO);
// Attach shadow map as a render target
// ...

// Render the scene from the light’s perspective, saving depth to the shadow map
// ...
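
Filling in those elided steps, a WebGL2-flavored sketch of the depth attachment (in WebGL1 this requires the WEBGL_depth_texture extension; SHADOW_SIZE is an assumed constant):

const SHADOW_SIZE = 1024;
gl.texImage2D(gl.TEXTURE_2D, 0, gl.DEPTH_COMPONENT24, SHADOW_SIZE, SHADOW_SIZE, 0,
              gl.DEPTH_COMPONENT, gl.UNSIGNED_INT, null);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MIN_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_MAG_FILTER, gl.NEAREST);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_S, gl.CLAMP_TO_EDGE);
gl.texParameteri(gl.TEXTURE_2D, gl.TEXTURE_WRAP_T, gl.CLAMP_TO_EDGE);
gl.framebufferTexture2D(gl.FRAMEBUFFER, gl.DEPTH_ATTACHMENT, gl.TEXTURE_2D, shadowMap, 0);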

Using Shadow Map in Main Render

// Fragment Shader
uniform sampler2D u_ShadowMap; // Shadow map sampler
uniform mat4 u_LightSpaceMatrix; // Transformation matrix from world to light space

vec4 shadowCoord = u_LightSpaceMatrix * vec4(worldPosition, 1.0);
vec3 projCoord = shadowCoord.xyz / shadowCoord.w * 0.5 + 0.5; // NDC -> [0, 1]
float closestDepth = texture2D(u_ShadowMap, projCoord.xy).r;

if (projCoord.z > closestDepth) {
    // Pixel is in shadow, reduce brightness
    finalColor *= shadowFactor;
}

Post-Processing

Post-processing involves additional pixel manipulation in screen space after the main render. Below is an example of a simple edge detection effect:

// Create post-processing framebuffer object
var postProcessFBO = gl.createFramebuffer();
gl.bindFramebuffer(gl.FRAMEBUFFER, postProcessFBO);
// Create floating-point texture for storing results
// ...

// Render to post-processing FBO
// ...

// Read from FBO and apply effect
gl.bindFramebuffer(gl.FRAMEBUFFER, null);
gl.drawArrays(gl.TRIANGLES, 0, 6); // Draw a full-screen quad (two triangles)

Post-Processing Shader

// Fragment Shader
uniform sampler2D u_ScreenTexture; // Main render screen texture
varying vec2 v_TexCoord;
const float edgeThreshold = 0.1;

void main() {
vec4 fragColor = texture2D(u_ScreenTexture, v_TexCoord);
// Edge detection calculation producing a float edge value
// ...

// Apply edge detection result (edge computed above)
if (edge > edgeThreshold) {
    fragColor.rgb = vec3(1.0, 0.0, 0.0); // Highlight edges
}

gl_FragColor = fragColor;
}

Complex Post-Processing Effects

Complex post-processing effects in WebGL often involve multiple steps, such as chaining multiple framebuffer objects. Below is an example of implementing blur and tonemapping effects:

  1. Render the main scene to the first framebuffer object.
  2. Read from the first framebuffer, apply horizontal blur, and store in the second framebuffer.
  3. Read from the second framebuffer, apply vertical blur, and store back in the first framebuffer.
  4. Read from the first framebuffer, apply tonemapping, and output to the screen.

// Initialize framebuffer objects and textures
var fbo1 = createFramebuffer();
var fbo2 = createFramebuffer();
var texture1 = createTexture();
var texture2 = createTexture();

// Render main scene to fbo1
renderMainSceneTo(fbo1, texture1);

// Horizontal blur
renderTo(fbo2, texture1, horizontalBlurShader);
// Vertical blur
renderTo(fbo1, texture2, verticalBlurShader);

// Tonemapping
renderToScreen(fbo1, toneMappingShader);

Blur Shader

Blur is typically implemented with a convolution filter. Below is a simple 5-tap horizontal blur (a box filter for brevity; a true Gaussian would weight the taps, e.g. with the binomial kernel 1 4 6 4 1 divided by 16):

// Horizontal Blur Shader
uniform sampler2D u_Texture; // Input texture
uniform vec2 u_TexelSize; // Size of one texel

vec4 blurHorizontal() {
    vec4 sum = vec4(0.0);
    for (int i = -2; i <= 2; i++) {
        vec2 offset = vec2(float(i), 0.0) * u_TexelSize;
        sum += texture2D(u_Texture, v_TexCoord + offset);
    }
    return sum * 0.2; // Average of the 5 taps
}

// Fragment Shader
void main() {
    gl_FragColor = blurHorizontal();
}

Tonemapping Shader

Tonemapping adjusts the dynamic range of an image to fit the display’s capabilities. Below is a simple Reinhard tonemapping example:

// Fragment Shader
uniform sampler2D u_Texture; // Input texture
uniform float u_Exposure; // Exposure value

vec3 reinhardTonemap(vec3 x) {
    return x / (1.0 + x);
}

void main() {
    vec3 color = texture2D(u_Texture, v_TexCoord).rgb * u_Exposure;
    gl_FragColor.rgb = reinhardTonemap(color);
    gl_FragColor.a = 1.0;
}

Advanced Visual Effects

WebGL supports advanced visual effects like SSAO (Screen-Space Ambient Occlusion), DOF (Depth of Field), and Bloom.

1. SSAO (Screen-Space Ambient Occlusion)

SSAO approximates how nearby geometry blocks ambient light, enhancing scene depth. A full implementation reconstructs view-space positions and orients a sampling hemisphere along per-pixel normals; below is a simplified, depth-only sketch:

// Fragment Shader (simplified depth-only SSAO)
uniform sampler2D u_DepthMap; // Depth texture
uniform vec2 u_SSAOKernel[64]; // Random 2D sample offsets
uniform float u_SampleRadius; // Sample radius
uniform float u_AOStrength; // AO strength

varying vec2 v_TexCoord;

float calcAO(vec2 coord, float centerDepth) {
    float ao = 0.0;
    for (int i = 0; i < 64; i++) {
        // Sample the depth buffer at a nearby offset
        vec2 sampleCoord = coord + u_SSAOKernel[i] * u_SampleRadius;
        float sampleDepth = texture2D(u_DepthMap, sampleCoord).r;
        // Fade out occlusion from samples far in front of the center
        float rangeCheck = smoothstep(0.0, 1.0, u_SampleRadius / (abs(centerDepth - sampleDepth) + 1e-5));
        ao += sampleDepth < centerDepth ? rangeCheck : 0.0;
    }
    return ao / 64.0;
}

void main() {
    float centerDepth = texture2D(u_DepthMap, v_TexCoord).r;
    float ao = calcAO(v_TexCoord, centerDepth);
    gl_FragColor = vec4(vec3(1.0 - ao * u_AOStrength), 1.0); // Darken occluded areas
}

2. DOF (Depth of Field)

DOF simulates a camera’s focal distance, blurring objects outside the focus range. Below is a simple DOF shader example:

// Fragment Shader
uniform sampler2D u_Texture; // Sharp scene texture
uniform sampler2D u_BlurTexture; // Blurred scene texture
uniform sampler2D u_DepthTexture; // Scene depth, used to measure distance from focus
uniform vec2 u_FocusRange; // Focus range
uniform float u_BlurSize; // Blur intensity

varying vec2 v_TexCoord;

float getDistanceWeight(float dist) {
    float d = abs(dist - 0.5); // distance from the focal plane (fixed at 0.5 for simplicity)
    return smoothstep(u_FocusRange.x, u_FocusRange.y, d);
}

void main() {
    vec2 uv = v_TexCoord;
    vec4 sharp = texture2D(u_Texture, uv);
    vec4 blurred = texture2D(u_BlurTexture, uv);
    float depth = texture2D(u_DepthTexture, uv).r;
    float focusWeight = getDistanceWeight(depth);
    gl_FragColor = mix(sharp, blurred, focusWeight * u_BlurSize);
}

3. Bloom

Bloom simulates overexposure from bright light sources, creating a glow effect. Below is a simple Bloom shader example:

// Fragment Shader
uniform sampler2D u_Texture; // Original texture
uniform sampler2D u_Highlights; // Highlight texture
uniform float u_BloomIntensity; // Bloom intensity

void main() {
    vec4 color = texture2D(u_Texture, v_TexCoord);
    vec4 bloom = texture2D(u_Highlights, v_TexCoord);
    gl_FragColor.rgb = color.rgb + bloom.rgb * u_BloomIntensity;
    gl_FragColor.a = color.a;
}

WebGL Animation and Interaction

WebGL animation and interaction typically involve JavaScript timers, matrix transformations, and handling user input events.

Animation

Animations in WebGL are achieved by continuously updating a model’s transformation matrix and re-rendering the scene. This often uses the requestAnimationFrame function to create an animation loop:

function animate() {
    requestAnimationFrame(animate); // Create animation loop

    // Update model transformation matrix, e.g., rotation
    var rotation = getRotation(); // Returns current rotation angle
    modelMatrix = rotate(modelMatrix, rotation, [0, 1, 0]); // Rotate matrix

    // Render scene
    renderScene(modelMatrix);
}

// Start animation
animate();

The getRotation() function may return a rotation angle based on time or user input, rotate() is a matrix rotation operation, and renderScene() renders the entire scene.

Interaction

Interaction in WebGL typically involves handling browser DOM events, such as mouse clicks and movements. These events need to be converted into 3D space coordinates for scene interaction:

canvas.addEventListener('mousemove', function(event) {
    var rect = canvas.getBoundingClientRect();
    var x = event.clientX - rect.left;
    var y = event.clientY - rect.top;

    // Convert screen coordinates to normalized device coordinates
    var ndcX = (2.0 * x) / canvas.width - 1.0;
    var ndcY = 1.0 - (2.0 * y) / canvas.height;

    // Convert normalized device coordinates to view-space coordinates
    var viewport = [0, 0, canvas.width, canvas.height];
    var viewSpacePoint = unproject(viewport, ndcX, ndcY, 0.0, 1.0);

    // Interact with the scene using view-space coordinates
    handleInteraction(viewSpacePoint);
});

The unproject() function converts screen coordinates to a 3D point, and handleInteraction() processes this point for actions like selecting objects or moving them.
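
One possible unproject() implementation, here taking the combined view-projection matrix rather than the viewport (a sketch using the gl-matrix library, as in the animation example):

function unproject(viewProjectionMatrix, ndcX, ndcY, ndcZ) {
    const inv = mat4.invert(mat4.create(), viewProjectionMatrix);
    const p = vec4.fromValues(ndcX, ndcY, ndcZ, 1.0);
    vec4.transformMat4(p, p, inv);
    return [p[0] / p[3], p[1] / p[3], p[2] / p[3]]; // perspective divide back to world space
}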

In libraries like Three.js, interaction and animation are simpler due to built-in event handling and animation systems. For example, Three.js provides Raycaster for detecting mouse clicks on 3D objects and Object3D.rotateOnAxis() for rotation animations.
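
For example, picking with Raycaster in Three.js takes only a few lines (ndcX and ndcY are the normalized device coordinates computed as above):

const raycaster = new THREE.Raycaster();
raycaster.setFromCamera(new THREE.Vector2(ndcX, ndcY), camera);
const hits = raycaster.intersectObjects(scene.children, true);
if (hits.length > 0) {
    console.log('picked', hits[0].object.name); // closest intersected object
}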

Drag and Drop

Implementing drag-and-drop in WebGL involves listening for mousedown, mousemove, and mouseup events, along with 3D coordinate conversions:

let isDragging = false;
let dragStartPos = null;

canvas.addEventListener('mousedown', function(event) {
    dragStartPos = get3DPointFromEvent(event);
    isDragging = true;
});

canvas.addEventListener('mousemove', function(event) {
    if (isDragging) {
        var currentPos = get3DPointFromEvent(event);
        var delta = subtract(currentPos, dragStartPos);
        // Update dragged object’s position
        updateObjectPosition(delta);
    }
});

canvas.addEventListener('mouseup', function(event) {
    isDragging = false;
});

function get3DPointFromEvent(event) {
    // Similar to previous screen-to-3D coordinate conversion
}

function updateObjectPosition(delta) {
    // Update object position based on delta
}

Touch Events

For touch devices, use touchstart, touchmove, and touchend events to handle touch interactions:

let touchStartPos = null;

canvas.addEventListener('touchstart', function(event) {
    event.preventDefault();
    touchStartPos = get3DPointFromTouchEvent(event.touches[0]);
});

canvas.addEventListener('touchmove', function(event) {
    if (touchStartPos) {
        var currentPos = get3DPointFromTouchEvent(event.touches[0]);
        var delta = subtract(currentPos, touchStartPos);
        // Update dragged object’s position
        updateObjectPosition(delta);
    }
});

canvas.addEventListener('touchend', function(event) {
    touchStartPos = null;
});

function get3DPointFromTouchEvent(touch) {
    // Similar to screen-to-3D conversion, using touch event coordinates
}

Multi-Touch

For devices supporting multi-touch, handle multiple touch points using the touches array in touchstart, touchmove, and touchend events:

let touchPoints = [];

canvas.addEventListener('touchstart', function(event) {
    event.preventDefault();
    for (let i = 0; i < event.changedTouches.length; i++) {
        let touch = event.changedTouches[i];
        touchPoints.push({ id: touch.identifier, pos: get3DPointFromTouchEvent(touch) });
    }
});

canvas.addEventListener('touchmove', function(event) {
    for (let i = 0; i < event.changedTouches.length; i++) {
        let touch = event.changedTouches[i];
        let touchIndex = findTouchIndexById(touch.identifier);
        if (touchIndex !== -1) {
            let oldPos = touchPoints[touchIndex].pos;
            let newPos = get3DPointFromTouchEvent(touch);
            let delta = subtract(newPos, oldPos);
            // Update object position for this touch point
            updateObjectPosition(delta, touchIndex);
        }
    }
});

canvas.addEventListener('touchend', function(event) {
    for (let i = 0; i < event.changedTouches.length; i++) {
        let touch = event.changedTouches[i];
        let touchIndex = findTouchIndexById(touch.identifier);
        if (touchIndex !== -1) {
            touchPoints.splice(touchIndex, 1);
        }
    }
});

function findTouchIndexById(id) {
    for (let i = 0; i < touchPoints.length; i++) {
        if (touchPoints[i].id === id) {
            return i;
        }
    }
    return -1;
}

function updateObjectPosition(delta, touchIndex) {
    // Update object position for the touch point
}

Gesture Recognition

Gesture recognition involves analyzing continuous touch point changes to identify specific gestures like rotation, scaling, or swiping. Below is a simplified gesture recognition framework:

let gestureState = {
    mode: 'none',
    startPositions: [],
    currentPositions: []
};

canvas.addEventListener('touchstart', function(event) {
    event.preventDefault();
    for (let i = 0; i < event.changedTouches.length; i++) {
        gestureState.startPositions.push(get2DPointFromTouchEvent(event.changedTouches[i]));
    }
    gestureState.currentPositions = gestureState.startPositions.slice();
});

canvas.addEventListener('touchmove', function(event) {
    event.preventDefault();
    gestureState.currentPositions = [];
    for (let i = 0; i < event.changedTouches.length; i++) {
        gestureState.currentPositions.push(get2DPointFromTouchEvent(event.changedTouches[i]));
    }
    recognizeGesture();
});

canvas.addEventListener('touchend', function(event) {
    gestureState.startPositions = [];
    gestureState.currentPositions = [];
});

function recognizeGesture() {
    switch (gestureState.currentPositions.length) {
        case 1:
            // Single-finger swipe
            break;
        case 2:
            // Two-finger scale or rotate
            if (isScaling()) {
                gestureState.mode = 'scale';
                // Handle scaling
            } else if (isRotating()) {
                gestureState.mode = 'rotate';
                // Handle rotation
            }
            break;
        default:
            // More complex gestures
            break;
    }

    if (gestureState.mode === 'scale') {
        // Process scaling
    } else if (gestureState.mode === 'rotate') {
        // Process rotation
    }
}

function isScaling() {
    // Calculate distance change between two fingers
}

function isRotating() {
    // Calculate angle change between two fingers
}

function get2DPointFromTouchEvent(touch) {
    // Convert touch event coordinates to 2D screen coordinates
}
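
The stubs above might be filled in as follows (a sketch assuming the points carry x/y fields; SCALE_THRESHOLD and ROTATE_THRESHOLD are assumed tuning constants):

function distance(a, b) {
    return Math.hypot(b.x - a.x, b.y - a.y); // Euclidean distance between two 2D points
}

function angleBetween(a, b) {
    return Math.atan2(b.y - a.y, b.x - a.x); // angle of the segment joining two points
}

function isScaling() {
    const d0 = distance(gestureState.startPositions[0], gestureState.startPositions[1]);
    const d1 = distance(gestureState.currentPositions[0], gestureState.currentPositions[1]);
    return Math.abs(d1 - d0) > SCALE_THRESHOLD;
}

function isRotating() {
    const a0 = angleBetween(gestureState.startPositions[0], gestureState.startPositions[1]);
    const a1 = angleBetween(gestureState.currentPositions[0], gestureState.currentPositions[1]);
    return Math.abs(a1 - a0) > ROTATE_THRESHOLD;
}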

Swipe Gestures

Swipe gestures are often used for navigation or scene transitions. Below is a basic swipe gesture detection:

let swipeStartPos = null;

canvas.addEventListener('mousedown', function(event) {
    swipeStartPos = get2DPointFromEvent(event);
});

canvas.addEventListener('mousemove', function(event) {
    if (swipeStartPos) {
        var currentPos = get2DPointFromEvent(event);
        var distance = length(subtract(currentPos, swipeStartPos));
        if (distance > SWIPE_THRESHOLD) {
            // Trigger swipe gesture
            triggerSwipe(currentPos);
            swipeStartPos = null;
        }
    }
});

function triggerSwipe(endPos) {
    // Handle swipe gesture, e.g., switch scenes or scroll
}

Zoom and Pan

Zooming and panning are often tied to multi-touch, such as two-finger zooming or single-finger panning. Below is a simplified implementation:

let pinchStartDistance = null;
let translateStartPos = null;

canvas.addEventListener('touchstart', function(event) {
    if (event.touches.length === 2) {
        pinchStartDistance = distanceBetweenTouches(event.touches[0], event.touches[1]);
    } else if (event.touches.length === 1) {
        translateStartPos = get2DPointFromTouchEvent(event.touches[0]);
    }
});

canvas.addEventListener('touchmove', function(event) {
    if (event.touches.length === 2 && pinchStartDistance) {
        var currentDistance = distanceBetweenTouches(event.touches[0], event.touches[1]);
        var scaleDelta = currentDistance / pinchStartDistance; // >1 spreads fingers, <1 pinches
        // Update zoom
        updateScale(scaleDelta);
    } else if (event.touches.length === 1 && translateStartPos) {
        var currentPos = get2DPointFromTouchEvent(event.touches[0]);
        var translateDelta = subtract(currentPos, translateStartPos);
        // Update panning
        updateTranslation(translateDelta);
    }
});

function distanceBetweenTouches(a, b) {
    return Math.hypot(b.clientX - a.clientX, b.clientY - a.clientY);
}

function updateScale(delta) {
    // Update scene zoom based on delta
}

function updateTranslation(delta) {
    // Update scene panning based on delta
}

Keyboard Controls

Keyboard controls are common in games or interactive applications, allowing users to control scene objects via key presses. Below is a simple keyboard event handling example:

let keyStates = {};

document.addEventListener('keydown', function(event) {
    keyStates[event.key] = true;
});

document.addEventListener('keyup', function(event) {
    keyStates[event.key] = false;
});

function update() {
    if (keyStates['ArrowUp']) {
        // Move object up
    } else if (keyStates['ArrowDown']) {
        // Move object down
    }
    // Handle other keys...
}

Gamepad Controls

Game controllers (e.g., gamepads) offer richer interaction, especially for games. Below is a basic example using the Gamepad API:

let gamepad;

function update() {
    if (navigator.getGamepads) {
        var pads = navigator.getGamepads();
        for (var i = 0; i < pads.length; i++) {
            if (pads[i]) {
                gamepad = pads[i];
                break;
            }
        }
    }

    if (gamepad) {
        if (gamepad.buttons[0].pressed) {
            // A button pressed, perform action
        }
        if (gamepad.axes[0] < -0.5) {
            // Left stick pushed left, move object left
        }
        // Handle other axes and buttons...
    }
}

Virtual Reality (VR) and Augmented Reality (AR) Interaction

WebGL can integrate with the WebXR API (which replaces the now-deprecated WebVR API) for virtual reality and augmented reality interactions. Below is a simple legacy WebVR-style interaction example:

let vrDisplay;

function initVR() {
    if (navigator.getVRDisplays) {
        navigator.getVRDisplays().then(displays => {
            if (displays.length > 0) {
                vrDisplay = displays[0];
                vrDisplay.requestAnimationFrame(render);
            }
        });
    }
}

const frameData = new VRFrameData(); // Holder filled each frame by the display

function render() {
    if (vrDisplay) {
        vrDisplay.getFrameData(frameData);
        // Update scene and perspective using frameData
        vrDisplay.requestAnimationFrame(render);
    } else {
        // Fallback rendering without VR
    }
}

For augmented reality, WebGL can work with AR libraries like AR.js or A-Frame, using the camera and markers for AR interactions.

WebGL Performance Analysis and Optimization

WebGL performance analysis and optimization are critical for ensuring smooth application performance. This includes reducing draw calls, managing memory, optimizing shader code, and leveraging hardware features.

Reducing Draw Calls

Batching

Combine multiple objects into a single geometry to reduce independent draw calls. This can be achieved by merging objects with the same material or using instancing techniques.

// Use instancing to draw multiple identical objects
const geometry = new THREE.BoxGeometry();
const material = new THREE.MeshBasicMaterial({ color: 0x00ff00 });
const instances = 100;
const mesh = new THREE.InstancedMesh(geometry, material, instances);
for (let i = 0; i < instances; i++) {
    const position = new THREE.Vector3(Math.random() - 0.5, Math.random() - 0.5, Math.random() - 0.5);
    mesh.setMatrixAt(i, new THREE.Matrix4().makeTranslation(position.x, position.y, position.z));
}
scene.add(mesh);

Optimizing Shaders

Reducing Conditional Branching

Avoid complex conditional statements in shaders: divergent branches force the GPU to execute both paths for a group of pixels, lowering performance.

// Avoid complex conditional branching
vec3 diffuseColor = mix(color1, color2, step(0.5, v_Uv.y));

Using Lower-Precision Data Types

Use lowp or mediump precision data types when precision loss is acceptable to reduce memory usage and bandwidth demands.

precision mediump float;

Managing Memory

Releasing Resources Promptly

Ensure unused textures, buffers, and other resources are released promptly.

texture.dispose();
geometry.dispose();
material.dispose();

Leveraging Hardware Features

Texture Compression

Use texture compression based on the user’s hardware support to reduce memory usage and loading times.

// Load a GPU-compressed texture (e.g., KTX2/Basis) and transcode it to a format the device supports
import { KTX2Loader } from 'three/examples/jsm/loaders/KTX2Loader';

const ktx2Loader = new KTX2Loader()
    .setTranscoderPath('path/to/basis/') // Basis transcoder files
    .detectSupport(renderer);
const texture = ktx2Loader.load('path/to/texture.ktx2');

Using Performance Analysis Tools

Chrome DevTools

Use the Performance panel in Chrome DevTools to record and analyze WebGL calls, identifying bottlenecks.

// Start a recording in the Chrome DevTools Performance panel, either with the
// Record button or with Ctrl + Shift + E (Windows/Linux) / Cmd + Shift + E (Mac),
// which records while reloading the page.

WebGL Inspector

Although WebGL Inspector is no longer maintained, tools such as Spector.js provide a deep, per-call view of WebGL frame performance.

Enabling Hardware Acceleration

Ensure the browser supports and enables hardware acceleration, which is critical for boosting WebGL performance.

// Typically handled automatically by the browser, but ensure users haven’t disabled hardware acceleration

Vertex Sharing

Using Index Buffers and Vertex Arrays

Use index buffers and vertex arrays to reduce memory usage and optimize rendering performance. For example, triangles sharing vertices can reuse vertex data instead of duplicating it.

// Create a geometry whose two triangles share two vertices
const vertices = [
    -1, -1, // 0: bottom-left
     1, -1, // 1: bottom-right
    -1,  1, // 2: top-left
     1,  1  // 3: top-right
];

// Triangles (0, 1, 2) and (1, 3, 2) reuse vertices 1 and 2
const indices = [
    0, 1, 2,
    1, 3, 2
];

const geometry = new THREE.BufferGeometry();
geometry.setAttribute('position', new THREE.Float32BufferAttribute(vertices, 2));
geometry.setIndex(new THREE.Uint16BufferAttribute(indices, 1));

const material = new THREE.MeshBasicMaterial({ color: 0x00ff00 });
const mesh = new THREE.Mesh(geometry, material);
scene.add(mesh);

Optimizing Texture Usage

Texture Atlases

Combine multiple small textures into a single large texture (texture atlas) to reduce texture-switching overhead.

// Load texture atlas
const atlasLoader = new THREE.TextureLoader();
const atlasTexture = atlasLoader.load('path/to/atlas.png');

// Create texture coordinates pointing into sub-regions of the atlas
const textureCoordinates = [
    // UVs for small texture 1 (left half of the atlas)
    0.0, 0.0,
    0.5, 0.0,
    0.0, 1.0,

    // UVs for small texture 2 (right half of the atlas)
    0.5, 0.0,
    1.0, 0.0,
    0.5, 1.0
];

// Set texture coordinates for geometry
geometry.setAttribute('uv', new THREE.Float32BufferAttribute(textureCoordinates, 2));

Mipmaps

Enable mipmaps to improve texture quality at different scales and optimize texture sampling.

const texture = new THREE.TextureLoader().load('path/to/texture.jpg');
texture.generateMipmaps = true;
texture.minFilter = THREE.LinearMipmapLinearFilter;

Using Deferred Rendering (Light Pre-Pass)

In some cases, deferred rendering can improve lighting computation efficiency, especially with many dynamic light sources.

// Conceptual sketch: three.js has no built-in deferred renderer, so the two
// passes are shown with assumed names. Pass 1 writes geometry data (normals,
// depth) into a G-buffer; pass 2 computes lighting in screen space.
const renderer = new THREE.WebGLRenderer({ antialias: true });
const gBuffer = new THREE.WebGLRenderTarget(width, height, { count: 2 }); // WebGL2 multiple render targets

// Pass 1: render the scene with a material that writes to the G-buffer
renderer.setRenderTarget(gBuffer);
renderer.render(scene, camera);

// Pass 2: full-screen pass reading the G-buffer and accumulating lights
renderer.setRenderTarget(null);
renderer.render(lightingPassScene, screenCamera); // assumed full-screen quad scene

Note that deferred rendering requires a deep understanding of WebGL and may not suit all scenes. For simple scenes, forward rendering may be more efficient.

Optimizing WebGL Context

Choosing the Right Context

When creating a WebGL context, opt for webgl2 if available, as it offers more features and optimizations, such as floating-point textures, 3D textures, transform feedback, and instanced rendering in core.

const canvas = document.createElement('canvas');
const context = canvas.getContext('webgl2');

Optimizing Vertex and Fragment Shaders

Avoiding Global Variables

Mutable global variables in shaders add hidden state that is hard to track and optimize. Pass state in through uniforms or attributes instead, and use a consistent naming convention (such as a u prefix) so uniforms are easy to identify.

// Not recommended: a mutable global used as hidden state
float time;
void main() {
    vec3 color = vec3(time);
    // ...
}

// Recommended: pass state in as a uniform
uniform float uTime;
void main() {
    vec3 color = vec3(uTime);
    // ...
}

Minimizing Computations

Reduce computations in shaders, especially in fragment shaders, as they execute per pixel. Optimize paths to avoid redundant multiplications and divisions.

// Not recommended: Redundant multiplications
vec3 color = vec3(vUv.x * 2.0 - 1.0, vUv.y * 2.0 - 1.0, 0.0);

// Recommended: Optimized computation
vec3 direction = vec3(vUv * 2.0 - 1.0, 0.0);

Precomputation and Baking

For static or near-static computations, such as lighting or normal maps, precompute results and store them in textures.

// Not recommended: Real-time lighting computation
vec3 light = normalize(lightPosition - vWorldPosition);
float diffuse = max(dot(normal, light), 0.0);

// Recommended: Bake lighting into texture
uniform sampler2D bakedLightMap;
float diffuse = texture2D(bakedLightMap, vUv).r;

Using LOD (Level of Detail)

Dynamically adjust model detail based on distance from the camera to save rendering resources.

// Create LOD object
const lod = new THREE.LOD();

// Add meshes of decreasing detail; the distance is where each level takes over
lod.addLevel(highDetailMesh, 0);
lod.addLevel(mediumDetailMesh, 100);
lod.addLevel(lowDetailMesh, 200);

// Set LOD object's position
lod.position.copy(object.position);
scene.add(lod);

// Update the displayed level based on camera distance
camera.updateProjectionMatrix();
lod.update(camera);

Leveraging Web Workers

Use Web Workers to handle compute-intensive tasks, such as geometry preprocessing or lighting calculations, in background threads to reduce main thread load.

// Create Web Worker
const worker = new Worker('worker.js');

// Communicate with Worker
worker.postMessage({ type: 'calculateLighting', data: sceneData });

worker.onmessage = function(event) {
    if (event.data.type === 'lightingCalculated') {
        // Apply computed results to scene
    }
};
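
The worker side might look like this (a sketch; computeLighting stands in for the heavy computation):

// worker.js
self.onmessage = function(event) {
    if (event.data.type === 'calculateLighting') {
        const result = computeLighting(event.data.data); // assumed heavy computation
        self.postMessage({ type: 'lightingCalculated', result: result });
    }
};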

Batch Rendering

Group objects with similar properties and render them together to reduce state changes and draw calls.

// Group objects by material
const groups = {};
for (const object of scene.children) {
    const material = object.material;
    if (!groups[material.uuid]) {
        groups[material.uuid] = { material, objects: [] };
    }
    groups[material.uuid].objects.push(object);
}

// Render groups (pseudo-API: setMaterial/renderObject stand in for binding a
// material once and then drawing every object that uses it)
for (const group of Object.values(groups)) {
    renderer.setMaterial(group.material);
    for (const object of group.objects) {
        renderer.renderObject(object, camera);
    }
}

Using WebGL Profiler

Use WebGL Profilers (e.g., WebGL Insights or Chrome DevTools’ WebGL section) to monitor and analyze performance bottlenecks, guiding optimization efforts.
