Spatial Coherence #

1. Introduction & Background #

Downsampling to convert an image to grayscale using shaders.

Technique: To convert an image to grayscale using shaders, we combine the grayscale conversion with downsampling. Downsampling reduces the resolution of the image by averaging multiple pixels into a single pixel, producing a lower-resolution image. The technique is applied in the following way:

1. The image is loaded into the shader program as a texture.
2. The shader program reads the texture and downsamples the image by averaging the color values of adjacent pixels; the downsampled image is stored in a new texture with a lower resolution.
3. The shader program then reads the downsampled image and applies a grayscale filter to each pixel. This is typically done by calculating the average value of the red, green, and blue color channels for each pixel and setting all three channels to this value.
4. The resulting grayscale image is stored as a new texture or rendered directly to the screen.

It’s worth noting that downsampling can result in a loss of detail and information in the image.
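As an illustration of the averaging step described above, a minimal downsampling fragment shader could look like the sketch below. It assumes the host sketch binds the source image as tex0 and supplies the size of one source texel through a texelSize uniform (texelSize is a name introduced here for illustration, not part of the code used later in this post):

precision highp float;

uniform sampler2D tex0;   // source image
uniform vec2 texelSize;   // (1.0 / imageWidth, 1.0 / imageHeight), assumed to be set by the host sketch
varying vec2 vTexCoord;

void main() {
  // Average a 2x2 block of neighbouring texels into a single output pixel.
  vec4 sum = texture2D(tex0, vTexCoord)
           + texture2D(tex0, vTexCoord + vec2(texelSize.x, 0.0))
           + texture2D(tex0, vTexCoord + vec2(0.0, texelSize.y))
           + texture2D(tex0, vTexCoord + texelSize);
  gl_FragColor = 0.25 * sum;
}

Rendered into a target half the width and height of the original image, each output pixel of this sketch covers roughly a 2×2 block of source pixels.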

Images that we use:

1. Retrato - Nadar.
2. La joven de la perla - Johannes Vermeer.
3. La Gioconda - Leonardo Da Vinci.

2. Code & results #

Fragment shader code:
precision highp float;

uniform sampler2D tex0;
varying vec2 vTexCoord;

void main() {
  vec4 color = texture2D(tex0, vTexCoord);
  float gray = dot(color.rgb, vec3(0.299, 0.587, 0.114));
  gl_FragColor = vec4(vec3(gray), color.a);
}

Fragment shader code, line by line:

  • precision highp float;

This sets the precision for floating-point operations in the shader. In this case, highp means high precision.

  • uniform sampler2D tex0;

This declares a uniform variable tex0 of type sampler2D, which will be used to sample the input texture.

  • varying vec2 vTexCoord;

This declares a varying variable vTexCoord of type vec2, which will be used to pass the texture coordinates from the vertex shader to the fragment shader.

  • void main() {

This is the entry point for the fragment shader. This function is called once for each pixel in the output image.

  • vec4 color = texture2D(tex0, vTexCoord);

This samples the input texture tex0 at the texture coordinates specified by vTexCoord and assigns the resulting color value to color.

  • float gray = dot(color.rgb, vec3(0.299, 0.587, 0.114));

This calculates the grayscale value of the pixel using the dot product of the color vector (color.rgb) and a vector containing the weights for the red, green, and blue channels (vec3(0.299, 0.587, 0.114)). These weights are commonly used in image processing to create a luminance-based grayscale conversion; the expanded form of this line is shown right after this walkthrough.

  • gl_FragColor = vec4(vec3(gray), color.a);

This sets the output color for the current pixel to a new vec4 value, where the RGB channels are set to the grayscale value (vec3(gray)) and the alpha channel is set to the alpha value of the input color (color.a).
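The dot product is just a compact way of writing a weighted sum; expanded, the grayscale line would read:

float gray = 0.299 * color.r + 0.587 * color.g + 0.114 * color.b;

The green channel receives the largest weight because the human eye is most sensitive to green light, which is why these luminance weights produce a more natural result than a plain average of the three channels.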

Vertex shader code:
attribute vec3 aPosition;
attribute vec2 aTexCoord;

uniform mat4 uModelViewMatrix;
uniform mat4 uProjectionMatrix;

varying vec2 vTexCoord;

void main() {
  gl_Position = uProjectionMatrix * uModelViewMatrix * vec4(aPosition, 1.0);
  vTexCoord = aTexCoord;
}
Vertex shader code, line by line:

  • attribute vec3 aPosition;
  • attribute vec2 aTexCoord;

These lines declare two attribute variables: aPosition of type vec3, which stores the position of each vertex in the mesh, and aTexCoord of type vec2, which stores the texture coordinates for each vertex.

  • uniform mat4 uModelViewMatrix;
  • uniform mat4 uProjectionMatrix;

These lines declare two uniform variables: uModelViewMatrix of type mat4, which is a matrix that transforms vertices from object space to view space, and uProjectionMatrix of type mat4, which is a matrix that transforms vertices from view space to clip space.

  • varying vec2 vTexCoord;

This declares a varying variable vTexCoord of type vec2, which will be used to pass the texture coordinates from the vertex shader to the fragment shader.

  • void main() {

This is the entry point for the vertex shader. This function is called once for each vertex in the mesh.

  • gl_Position = uProjectionMatrix * uModelViewMatrix * vec4(aPosition, 1.0);

This line calculates the position of the vertex in clip space. The object-space position (aPosition) is first extended to a homogeneous coordinate with vec4(aPosition, 1.0), then multiplied by the model-view matrix (uModelViewMatrix), and finally by the projection matrix (uProjectionMatrix). The resulting clip-space position is assigned to gl_Position, a built-in variable that represents the position of the vertex in clip space. A behaviour-equivalent two-step version of this line is shown right after this walkthrough.

  • vTexCoord = aTexCoord;

This line assigns the texture coordinates for the current vertex (aTexCoord) to vTexCoord, which will be passed on to the fragment shader for texture sampling.
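To make the order of the transforms explicit, the same vertex shader can be rewritten in two steps. This is just a behaviour-equivalent sketch of the code above; the intermediate names objectPosition and eyePosition are introduced here purely for illustration:

attribute vec3 aPosition;
attribute vec2 aTexCoord;

uniform mat4 uModelViewMatrix;
uniform mat4 uProjectionMatrix;

varying vec2 vTexCoord;

void main() {
  // Extend the object-space position to homogeneous coordinates.
  vec4 objectPosition = vec4(aPosition, 1.0);
  // Model-view matrix: object space -> view (eye) space.
  vec4 eyePosition = uModelViewMatrix * objectPosition;
  // Projection matrix: view space -> clip space.
  gl_Position = uProjectionMatrix * eyePosition;
  // Pass the texture coordinates through to the fragment shader.
  vTexCoord = aTexCoord;
}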

Applying the code, we get the following output for each image:

1. Retrato - Nadar.

2. La joven de la perla - Johannes Vermeer.
3. La Gioconda - Leonardo Da Vinci.

3. Conclusion #

The downsampling process to convert an image to grayscale using shaders can make the image appear to have lower resolution or be more pixelated because it involves reducing the number of pixels in the image.

When an image is downsampled, some of the original pixels are discarded or averaged to create a new, smaller image. In the case of grayscale conversion using shaders, this downsampling is typically done by rendering the image to a smaller texture using a fragment shader that calculates the grayscale value of each pixel. The resulting texture may have a lower resolution than the original image, depending on the size of the output texture.

Because downsampling involves reducing the number of pixels in the image, some details may be lost or blurred in the downsampling process. For example, if the original image contains fine lines or intricate patterns, these details may become less distinct or disappear altogether in the downsampled version. This loss of detail can make the image appear to have lower resolution or be more pixelated.