Image texture
[Image: Artificial texture example.]
[Image: Natural texture example.]

An image texture is the small-scale structure perceived in an image, arising from the spatial arrangement of color or intensities.[1] It can be quantified by a set of metrics calculated in image processing. Image texture metrics give us information about the whole image or selected regions.[1]

Image textures can be artificially created or found in natural scenes captured in an image. Image texture is one cue that can be used to help in segmentation or classification of images. For more accurate segmentation, the most useful features are spatial frequency and an average grey level.[2] To analyze an image texture in computer graphics, there are two main approaches: the structured approach and the statistical approach.

Structured Approach

A structured approach sees an image texture as a set of primitive texels in some regular or repeated pattern. This works well when analyzing artificial textures.

To obtain a structured description, a characterization of the spatial relationship of the texels is gathered by computing a Voronoi tessellation of the texels.
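As an illustration, here is a minimal Python sketch of that characterization, assuming texel centroids have already been extracted (the grid of coordinates below is purely hypothetical), using SciPy's Voronoi tessellation; the ridge structure then gives texel adjacency and spacing:

import numpy as np
from scipy.spatial import Voronoi

# Hypothetical texel centroids (e.g., blob centers extracted from a
# regular artificial texture); the coordinates are illustrative only.
centroids = np.array([[10, 10], [10, 30], [10, 50],
                      [30, 10], [30, 30], [30, 50],
                      [50, 10], [50, 30], [50, 50]], dtype=float)

vor = Voronoi(centroids)

# Each Voronoi ridge separates two neighboring texels, so ridge_points
# encodes the adjacency of the texel layout; the distribution of
# neighbor distances then characterizes the regularity of the pattern.
dists = [np.linalg.norm(centroids[a] - centroids[b])
         for a, b in vor.ridge_points]
print(f"mean texel spacing: {np.mean(dists):.1f}, std: {np.std(dists):.2f}")

For the regular grid above, the neighbor-distance distribution is tightly peaked; for an irregular natural texture it spreads out, which is exactly what the structured description captures.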

Statistical Approach

A statistical approach sees an image texture as a quantitative measure of the arrangement of intensities in a region. In general this approach is easier to compute and is more widely used, since natural textures are made of patterns of irregular subelements.

Edge Detection

Edge detection determines the number of edge pixels in a specified region, which helps characterize the complexity of the texture. After edges have been found, the direction of the edges can also be used as a characteristic of texture and can be useful in determining patterns in the texture. These directions can be represented as an average or in a histogram.

Consider a region with N pixels. A gradient-based edge detector is applied to this region, producing two outputs for each pixel p: the gradient magnitude Mag(p) and the gradient direction Dir(p). The edgeness per unit area can be defined by $F_{\text{edgeness}} = \frac{|\{p \mid \text{Mag}(p) > T\}|}{N}$ for some threshold T.

To include orientation with edgeness, histograms for both gradient magnitude and gradient direction can be used. $H_{\text{mag}}(R)$ denotes the normalized histogram of gradient magnitudes of region R, and $H_{\text{dir}}(R)$ denotes the normalized histogram of gradient orientations of region R. Both are normalized according to the size $N_R$ of the region. Then $F_{\text{mag,dir}} = (H_{\text{mag}}(R), H_{\text{dir}}(R))$ is a quantitative texture description of region R.
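A minimal NumPy sketch of these descriptors (the simple central-difference gradient, the threshold T, and the bin counts are illustrative choices, not prescribed by the text):

import numpy as np

def edgeness_descriptors(region, T=0.1, bins=8):
    # Gradient magnitude Mag(p) and direction Dir(p) for every pixel,
    # using central differences as a stand-in for any edge detector.
    gy, gx = np.gradient(region.astype(float))
    mag = np.hypot(gx, gy)
    direction = np.arctan2(gy, gx)
    N = region.size

    # Edgeness per unit area: fraction of pixels with Mag(p) > T.
    f_edgeness = np.count_nonzero(mag > T) / N

    # Histograms of magnitude and direction, normalized by region size N.
    h_mag, _ = np.histogram(mag, bins=bins)
    h_dir, _ = np.histogram(direction, bins=bins, range=(-np.pi, np.pi))
    return f_edgeness, h_mag / N, h_dir / N

# Example: a vertical step edge concentrates edgeness in one column.
region = np.zeros((16, 16))
region[:, 8:] = 1.0
f, h_mag, h_dir = edgeness_descriptors(region)
print(f"F_edgeness = {f:.3f}")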

Co-occurrence Matrices

The co-occurrence matrix captures numerical features of a texture using spatial relations of similar gray tones.[3] Numerical features computed from the co-occurrence matrix can be used to represent, compare, and classify textures. The following are a subset of standard features derivable from a normalized co-occurrence matrix:

$\text{Angular second moment (energy)} = \sum_{i=1}^{N_g} \sum_{j=1}^{N_g} p[i,j]^2$

$\text{Contrast} = \sum_{i=1}^{N_g} \sum_{j=1}^{N_g} (i-j)^2 \, p[i,j]$

$\text{Correlation} = \frac{\sum_{i=1}^{N_g} \sum_{j=1}^{N_g} ij \, p[i,j] - \mu_x \mu_y}{\sigma_x \sigma_y}$

$\text{Entropy} = -\sum_{i=1}^{N_g} \sum_{j=1}^{N_g} p[i,j] \log p[i,j]$

where $p[i,j]$ is the $[i,j]$th entry in a normalized gray-tone spatial-dependence matrix, $N_g$ is the number of distinct gray levels in the quantized image, and $\mu_x, \mu_y, \sigma_x, \sigma_y$ are the means and standard deviations of the row and column sums of $p$.
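A plain NumPy sketch of building a normalized co-occurrence matrix for a single displacement and computing a few of the features above (the displacement, the number of gray levels, and the random test image are arbitrary choices):

import numpy as np

def cooccurrence(img, di=0, dj=1, levels=8):
    # Count how often gray level i occurs at displacement (di, dj)
    # from gray level j, then normalize to obtain p[i, j].
    P = np.zeros((levels, levels))
    rows, cols = img.shape
    for r in range(rows - di):
        for c in range(cols - dj):
            P[img[r, c], img[r + di, c + dj]] += 1
    return P / P.sum()

def features(p):
    i, j = np.indices(p.shape)
    energy = np.sum(p ** 2)                    # angular second moment
    contrast = np.sum((i - j) ** 2 * p)
    nz = p > 0                                 # avoid log(0)
    entropy = -np.sum(p[nz] * np.log(p[nz]))
    return energy, contrast, entropy

img = np.random.randint(0, 8, size=(64, 64))   # quantized to Ng = 8 levels
print(features(cooccurrence(img)))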

One negative aspect of the co-occurrence matrix is that the extracted features do not necessarily correspond to visual perception. It is used in dentistry for the objective evaluation of lesions [DOI: 10.1155/2020/8831161], treatment efficacy [DOI: 10.3390/ma13163614; DOI: 10.11607/jomi.5686; DOI: 10.3390/ma13173854; DOI: 10.3390/ma13132935] and bone reconstruction during healing [DOI: 10.5114/aoms.2013.33557; DOI: 10.1259/dmfr/22185098; EID: 2-s2.0-81455161223; DOI: 10.3390/ma13163649].

Laws Texture Energy Measures

Another approach is to use local masks to detect various types of texture features. Laws[4] originally used four vectors representing texture features to create sixteen 2D masks from the outer products of the pairs of vectors. The four vectors and relevant features were as follows:

 L5  =  [ +1  +4  6  +4  +1 ]  (Level)
 E5  =  [ -1  -2  0  +2  +1 ]  (Edge)
 S5  =  [ -1   0  2   0  -1 ]  (Spot)
 R5  =  [ +1  -4  6  -4  +1 ]  (Ripple)

To these 4, a fifth is sometimes added:[5]

 W5  =  [ -1  +2  0  -2  +1 ]  (Wave)

From Laws' 4 vectors, 16 5x5 "energy maps" are then filtered down to 9 in order to remove certain symmetric pairs. For instance, L5E5 measures vertical edge content and E5L5 measures horizontal edge content. The average of these two measures is the "edginess" of the content. The resulting 9 maps used by Laws are as follows:[6]

L5E5/E5L5
L5R5/R5L5
E5S5/S5E5
S5S5
R5R5
L5S5/S5L5
E5E5
E5R5/R5E5
S5R5/R5S5

Running each of these nine maps over an image, creating a new image in which each pixel holds the filter response at the mask origin ([2,2]), results in 9 "energy maps": conceptually, an image with each pixel associated with a vector of 9 texture attributes.
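A sketch of that pipeline with NumPy/SciPy (the window size, the absolute-value energy, and the omission of the usual illumination normalization by the L5L5 response are simplifications):

import numpy as np
from scipy.ndimage import convolve, uniform_filter

# Laws' 1D vectors, as listed above.
V = {"L5": np.array([ 1.,  4., 6.,  4.,  1.]),
     "E5": np.array([-1., -2., 0.,  2.,  1.]),
     "S5": np.array([-1.,  0., 2.,  0., -1.]),
     "R5": np.array([ 1., -4., 6., -4.,  1.])}

def laws_energy_maps(image, window=15):
    # 16 filtered images from the 5x5 outer-product masks.
    filtered = {a + b: convolve(image, np.outer(V[a], V[b]))
                for a in V for b in V}
    # The 9 maps listed above; symmetric pairs are averaged.
    pairs = [("L5E5", "E5L5"), ("L5R5", "R5L5"), ("E5S5", "S5E5"),
             ("S5S5", "S5S5"), ("R5R5", "R5R5"), ("L5S5", "S5L5"),
             ("E5E5", "E5E5"), ("E5R5", "R5E5"), ("S5R5", "R5S5")]
    maps = []
    for a, b in pairs:
        energy = uniform_filter(np.abs(filtered[a]), window)
        if a != b:
            energy = (energy + uniform_filter(np.abs(filtered[b]), window)) / 2
        maps.append(energy)
    return np.stack(maps, axis=-1)   # (H, W, 9): one 9-vector per pixel

print(laws_energy_maps(np.random.rand(64, 64)).shape)   # (64, 64, 9)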

Autocorrelation and Power Spectrum

The autocorrelation function of an image can be used to detect repetitive patterns of textures.
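One common way to compute it, sketched below, is through the Wiener–Khinchin relation between the autocorrelation function and the power spectrum (the striped test image is illustrative):

import numpy as np

def autocorrelation(image):
    # Wiener-Khinchin: the autocorrelation is the inverse FFT of the
    # power spectrum |F|^2; normalize so the zero-lag value is 1.
    img = image - image.mean()
    power = np.abs(np.fft.fft2(img)) ** 2
    rho = np.fft.ifft2(power).real
    return np.fft.fftshift(rho / rho.flat[0])

# A periodic texture (vertical stripes of period 8) produces repeated
# autocorrelation peaks at horizontal lags that are multiples of 8.
x = np.arange(64)
stripes = np.tile(np.sin(2 * np.pi * x / 8), (64, 1))
rho = autocorrelation(stripes)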

Texture Segmentation

Image texture can be used to describe regions when segmenting an image. There are two main types of segmentation based on image texture: region based and boundary based. Though image texture is not a perfect measure for segmentation, it is used along with other measures, such as color, to help segment an image.

Region Based

Attempts to group or cluster pixels based on texture properties.
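A minimal illustration of the region-based idea: cluster per-pixel texture feature vectors (below, random placeholders standing in for, e.g., the Laws energy maps) with k-means; the number of clusters is an arbitrary choice:

import numpy as np
from sklearn.cluster import KMeans

# Per-pixel texture features, e.g. the (H, W, 9) Laws energy maps
# from the earlier sketch; random values serve as a placeholder here.
features = np.random.rand(64, 64, 9)
H, W, D = features.shape

# Cluster pixels by texture; each cluster is a tentative region.
labels = KMeans(n_clusters=3, n_init=10).fit_predict(features.reshape(-1, D))
segments = labels.reshape(H, W)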

Boundary Based

Attempts to group or cluster pixels based on edges between pixels that come from different texture properties.
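And a correspondingly minimal sketch of the boundary-based idea: boundaries between differently textured regions appear as large gradients in a texture feature map (the synthetic two-region feature map and the threshold are illustrative):

import numpy as np

# Synthetic feature map with two texture regions of different means;
# in practice this would be, e.g., one Laws energy map.
feature_map = np.hstack([0.0 + 0.2 * np.random.rand(64, 32),
                         0.8 + 0.2 * np.random.rand(64, 32)])

gy, gx = np.gradient(feature_map)
texture_edges = np.hypot(gx, gy) > 0.25   # arbitrary threshold
print(np.count_nonzero(texture_edges, axis=0)[30:34])  # peak near column 32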


Further reading

Peter Howarth, Stefan Rüger, "Evaluation of texture features for content-based image retrieval", Proceedings of the International Conference on Image and Video Retrieval, Springer-Verlag, 2004

A detailed description of texture analysis in biomedical images can be found in Depeursinge et al. (2017).[7] Texture analysis is used to examine radiological images in oral surgery [DOI: 10.3390/ma13132935; DOI: 10.3390/ma13163649] and periodontology [DOI: 10.3390/ma13163614; DOI: 10.17219/acem/104524].

References
  1. Linda G. Shapiro and George C. Stockman, Computer Vision, Upper Saddle River: Prentice–Hall, 2001.
  2. Trambitskiy K.V.; Anding K.; Polte G.A.; Garten D.; Musalimov V.M. (2015). "Out-of-focus region segmentation of 2D surface images with the use of texture features". Scientific and Technical Journal of Information Technologies, Mechanics and Optics. 15 (5): 796–802. doi:10.17586/2226-1494-2015-15-5-796-802.
  3. Robert M. Haralick, K. Shanmugam, and Its'hak Dinstein, "Textural Features for Image Classification", IEEE Transactions on Systems, Man, and Cybernetics, 1973, SMC-3 (6): 610–621. http://haralick.org/journals/TexturalFeaturesHaralickShanmugamDinstein.pdf
  4. K. Laws, "Textured Image Segmentation", Ph.D. Dissertation, University of Southern California, January 1980. https://apps.dtic.mil/sti/pdfs/ADA083283.pdf
  5. A. Meyer-Bäse, Pattern Recognition for Medical Imaging, Academic Press, 2004.
  6. CSE576: Computer Vision: Chapter 7 (PDF). University of Washington. 2000. pp. 9–10. http://courses.cs.washington.edu/courses/cse576/book/ch7.pdf
  7. Depeursinge, A.; Al-Kadi, Omar S.; Mitchell, J. Ross (2017-10-01). Biomedical Texture Analysis: Fundamentals, Tools and Challenges. Elsevier. ISBN 9780128121337.