Shaders #

Shaders are programs used in computer graphics to control the rendering of visual effects and to manipulate the appearance of objects and surfaces in a 3D scene. They are written in a specialized shading language, such as the OpenGL Shading Language (GLSL) or the High-Level Shading Language (HLSL), and executed on the GPU (Graphics Processing Unit) to perform calculations in real time.

Shaders are a fundamental part of modern computer graphics pipelines and are responsible for generating the colors, textures, lighting, and various visual effects seen in video games, animated movies, virtual reality applications, and other interactive visual experiences.

There are different types of shaders, each with its specific purpose:

  1. Vertex Shaders: These shaders operate on individual vertices of 3D models. They perform transformations such as translation, rotation, scaling, and projection to position the vertices correctly in 3D space. They can also calculate per-vertex lighting and pass data to other shaders (a minimal vertex/fragment pair is sketched after this list).

  2. Pixel (Fragment) Shaders: Pixel shaders are responsible for determining the final color of each pixel or fragment of a rendered image. They calculate the color based on factors like lighting conditions, material properties, textures, and other visual effects. Pixel shaders can produce realistic lighting effects, shadows, reflections, refractions, and complex surface appearances.

  3. Geometry Shaders: Geometry shaders take incoming primitive shapes, such as points, lines, or triangles, and can generate new geometry or modify existing geometry. They allow for operations such as tessellation, particle system generation, or procedural geometry manipulation.

  4. Compute Shaders: Compute shaders are used for general-purpose computation on the GPU. They are not restricted to graphics tasks and can be used for tasks like physics simulations, data processing, or parallel computations that can benefit from the GPU’s parallel processing capabilities.
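To make the first two stages concrete, below is a minimal GLSL ES (WebGL 1) vertex/fragment shader pair. The attribute and uniform names (aPosition, aTexCoord, uModelViewMatrix, uProjectionMatrix) follow p5.js conventions; uTexture is an assumed uniform supplied by the host sketch. This is a sketch, not a complete rendering setup.

```glsl
// Vertex shader: transforms each vertex into clip space and forwards
// its texture coordinate to the fragment stage.
precision mediump float;

attribute vec3 aPosition;
attribute vec2 aTexCoord;

uniform mat4 uModelViewMatrix;
uniform mat4 uProjectionMatrix;

varying vec2 vTexCoord;

void main() {
  vTexCoord = aTexCoord;
  gl_Position = uProjectionMatrix * uModelViewMatrix * vec4(aPosition, 1.0);
}
```

```glsl
// Fragment shader: determines the final color of each fragment,
// here by sampling a texture and applying a warm tint.
precision mediump float;

varying vec2 vTexCoord;

uniform sampler2D uTexture; // assumed uniform set from the host sketch

void main() {
  vec4 texColor = texture2D(uTexture, vTexCoord);
  gl_FragColor = vec4(texColor.rgb * vec3(1.0, 0.9, 0.8), texColor.a);
}
```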

Shaders provide artists, designers, and developers with a powerful toolset for creating visually stunning, immersive graphics in real-time applications. By manipulating shader behavior, developers can produce a wide range of visual effects and achieve the desired look and feel for their applications.

Types #

Coloring
Image Processing
Digital image processing is the use of a digital computer to process digital images through an algorithm (Atalla & Kahng, n.d.). In WebGL (i.e., GLSL ES), texturing is used to implement image processing (Charalambos, 2023). For this exercise, different effects were applied to an image and a video, using image processing through texturing and implementing convolutions to apply different masks or kernels.
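As a minimal sketch of image processing through texturing, the fragment shader below inverts each pixel of an image or video frame bound as a texture; uTexture is an assumed uniform name set from the host sketch, not from the original exercise.

```glsl
// Fragment shader: a simple per-pixel effect (color inversion) applied
// to an image or video frame bound as a texture.
precision mediump float;

varying vec2 vTexCoord;

uniform sampler2D uTexture; // assumed uniform: the image or video frame

void main() {
  vec4 c = texture2D(uTexture, vTexCoord);
  gl_FragColor = vec4(1.0 - c.rgb, c.a); // invert each color channel
}
```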
Photomosaic
Photomosaic is a technique in visual computing that involves creating a larger image by assembling numerous smaller images called tiles. These tiles, often small photographs or images, are carefully chosen to replicate the colors and patterns of specific regions within the larger target image. When viewed from a distance, the tiles blend together to form a cohesive and visually appealing composition. Photomosaic has gained popularity as a creative and artistic way to transform images into stunning collages and visual representations.
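One way to sketch the core lookup in a fragment shader, assuming the tiles have been pre-sorted by brightness into a horizontal strip atlas; all uniform names here are illustrative assumptions, not the original exercise's interface.

```glsl
// Fragment shader sketch of the photomosaic idea: the target image is
// sampled once per mosaic cell, and the cell's brightness selects a
// tile from an atlas laid out as a horizontal strip sorted dark to
// bright. The fragment's position within the cell indexes into the tile.
precision mediump float;

varying vec2 vTexCoord;

uniform sampler2D uTarget;   // assumed: image being approximated
uniform sampler2D uAtlas;    // assumed: strip of tiles, dark to bright
uniform float uCells;        // assumed: mosaic cells per row, e.g. 60.0
uniform float uNumTiles;     // assumed: number of tiles in the strip

void main() {
  // Snap this fragment to its mosaic cell and sample the cell's color.
  vec2 cell = floor(vTexCoord * uCells) / uCells;
  vec3 avg = texture2D(uTarget, cell + 0.5 / uCells).rgb;
  float luma = dot(avg, vec3(0.299, 0.587, 0.114));

  // Pick the tile whose brightness matches, then sample inside it.
  float tile = floor(luma * (uNumTiles - 1.0));
  vec2 inCell = fract(vTexCoord * uCells);
  vec2 atlasUV = vec2((tile + inCell.x) / uNumTiles, inCell.y);
  gl_FragColor = texture2D(uAtlas, atlasUV);
}
```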
Post Effects
The goal of this exercise was to implement some interesting post effects. Convolution is a mathematical operation that combines two functions to produce a third function. In the context of image processing, convolution applies a filter kernel to an image to perform operations such as blurring, sharpening, and edge detection. In p5.js, convolution can be implemented with shaders, programs that run on the GPU and can perform highly parallelized operations on images.
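A minimal sketch of such a convolution in a fragment shader, using a 3x3 sharpen kernel; uTexture and uTexelSize (one over the texture resolution) are assumed uniforms supplied by the host sketch.

```glsl
// Fragment shader: 3x3 convolution post effect with a sharpen kernel.
// Kernel:   0 -1  0
//          -1  5 -1
//           0 -1  0
// The kernel's weights sum to 1, so overall brightness is preserved.
precision mediump float;

varying vec2 vTexCoord;

uniform sampler2D uTexture; // assumed uniform: the rendered frame
uniform vec2 uTexelSize;    // assumed uniform: 1.0 / resolution

void main() {
  vec3 sum = 5.0 * texture2D(uTexture, vTexCoord).rgb;
  sum -= texture2D(uTexture, vTexCoord + vec2(uTexelSize.x, 0.0)).rgb;
  sum -= texture2D(uTexture, vTexCoord - vec2(uTexelSize.x, 0.0)).rgb;
  sum -= texture2D(uTexture, vTexCoord + vec2(0.0, uTexelSize.y)).rgb;
  sum -= texture2D(uTexture, vTexCoord - vec2(0.0, uTexelSize.y)).rgb;
  gl_FragColor = vec4(sum, 1.0);
}
```

Swapping the weights for a box blur or a Sobel kernel turns the same shader into a blur or edge-detection effect.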
Procedural Texturing
The goal of procedural texturing is to generate a texture algorithmically, in such a way that the result can be mapped onto a shape as a texture. Procedural texturing requires a frame buffer object, which in p5.js is implemented as a p5.Graphics object (Procedural Texturing, 2023). A framebuffer (frame buffer, or sometimes framestore) is a portion of random-access memory (RAM) containing a bitmap that drives a video display.
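As a minimal example of the idea, the fragment shader below generates a checkerboard purely from the interpolated texture coordinate, with no input image; rendered into an offscreen p5.Graphics buffer, the result can then be mapped onto any shape like an ordinary texture. uTiles is an assumed uniform.

```glsl
// Fragment shader: a procedurally generated checkerboard. The pattern
// is computed entirely from the texture coordinate, so no image needs
// to be uploaded to the GPU.
precision mediump float;

varying vec2 vTexCoord;

uniform float uTiles; // assumed uniform: checker squares per side

void main() {
  vec2 grid = floor(vTexCoord * uTiles);
  float checker = mod(grid.x + grid.y, 2.0); // alternates 0.0 / 1.0
  gl_FragColor = vec4(vec3(checker), 1.0);
}
```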
Spatial Coherence
This exercise uses downsampling to convert an image to grayscale with shaders. Downsampling reduces the resolution of an image by averaging multiple pixels into a single pixel, producing a lower-resolution result. The technique is applied as follows: first, the image is loaded into the shader program as a texture.
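A minimal sketch of the idea in a fragment shader: texture coordinates are snapped to a coarse grid so each block reads one representative texel, which is then converted to grayscale with a luma weighting. A full average would loop over every texel in the block (or rely on mipmapping); uTexture and uBlocks are assumed uniform names.

```glsl
// Fragment shader: downsampled grayscale. Each block of output pixels
// shares a single sample taken at the block center, and the sample is
// reduced to luminance.
precision mediump float;

varying vec2 vTexCoord;

uniform sampler2D uTexture; // assumed uniform: the source image
uniform float uBlocks;      // assumed uniform: blocks per row/column

void main() {
  // Snap to the center of the enclosing block (one sample per block).
  vec2 uv = (floor(vTexCoord * uBlocks) + 0.5) / uBlocks;
  vec3 c = texture2D(uTexture, uv).rgb;
  float luma = dot(c, vec3(0.299, 0.587, 0.114));
  gl_FragColor = vec4(vec3(luma), 1.0);
}
```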
Texturing
A texture map is an image applied (mapped) to the surface of a shape or polygon. This may be a bitmap image or a procedural texture. Texture maps may be stored in common image file formats, referenced by 3D model formats or material definitions, and assembled into resource bundles (Texture Mapping, n.d.). Texture sampling is the process of reading textures through the GPU.
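In GLSL ES, sampling comes down to a single built-in call: the fragment shader below reads the texel at the interpolated coordinate, with the GPU applying the configured filtering (nearest or linear) and wrap modes. uTexture is an assumed uniform name.

```glsl
// Fragment shader: the basic texture sampling operation. texture2D
// looks up the texel at vTexCoord; filtering and wrapping behavior
// are set on the texture object, not in the shader.
precision mediump float;

varying vec2 vTexCoord;

uniform sampler2D uTexture; // assumed uniform bound to the texture map

void main() {
  gl_FragColor = texture2D(uTexture, vTexCoord);
}
```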