 Originally Posted by MadMojoMonkey
The facial recognition software isn't categorizing things like a human would. It's examining the color values of groups of pixels using various filters and algorithms. It's not comparing nose size and distance between the eyes; it's comparing the colors and patterns of pixels, which is a subtle difference that bears stating. It doesn't know who is gay until it is fed a sample set of pictures and told which is which to begin with. Once that process is seeded, you can ask it which of the two piles it would sort a new picture into. Often it's not very good at first, so you seed it with more and more examples until it starts to show reliable results.
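The pile-sorting loop described in the quote is basically supervised classification on raw pixels. A minimal sketch, assuming toy synthetic 8x8 "images" and a simple nearest-centroid rule (real systems use far richer models, but the seed-then-sort idea is the same):

```python
import numpy as np

rng = np.random.default_rng(0)
# Two "piles" of toy 8x8 grayscale images, flattened to 64 pixel values.
# The model never sees noses or eyes, only pixel statistics; the piles
# here are made to differ in average brightness so the toy is separable.
pile_a = rng.normal(loc=0.3, scale=0.1, size=(200, 64))
pile_b = rng.normal(loc=0.7, scale=0.1, size=(200, 64))

# "Seeding": compute a mean image per labeled pile.
centroid_a = pile_a.mean(axis=0)
centroid_b = pile_b.mean(axis=0)

def sort_into_pile(image):
    """Nearest-centroid rule: which pile's average does this image resemble?"""
    dist_a = np.linalg.norm(image - centroid_a)
    dist_b = np.linalg.norm(image - centroid_b)
    return "A" if dist_a < dist_b else "B"

# Ask which pile it would sort a new picture into.
new_picture = rng.normal(loc=0.7, scale=0.1, size=64)
print(sort_into_pile(new_picture))  # resembles pile B's average, so: B
```

Seeding it with more pictures just means the centroids (or, in a real system, the model weights) get re-estimated from a bigger labeled sample, which is why accuracy tends to improve with more data.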
Not sure if I'm misreading you, but you can definitely teach it what an eye, a mouth, or a nose looks like and then have it report the relative distances between them. Smartphones and webcams can already do that, and Facebook does it (or did) to auto-tag people in pictures.
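For what it's worth, the landmark-distance idea can be sketched like this; the coordinates below are made-up stand-ins for what a real detector (e.g. dlib's 68-point model or a phone's face API) would actually return:

```python
import numpy as np

# Hypothetical (x, y) landmark coordinates, standing in for detector output.
left_eye  = np.array([30.0, 40.0])
right_eye = np.array([70.0, 40.0])
chin      = np.array([50.0, 95.0])

# Relative distances are scale-invariant: normalize by a face-size measure
# (here, eye-midpoint to chin) so the ratio doesn't depend on image resolution.
face_height  = np.linalg.norm(chin - (left_eye + right_eye) / 2)
eye_distance = np.linalg.norm(right_eye - left_eye)
print(round(eye_distance / face_height, 3))
```

So once landmarks are available, "distance between the eyes" is just geometry on those points, which is exactly the kind of feature a study could feed into a classifier.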
I wouldn't mind reading the paper if it's available anywhere, but unless it's verified by independent researchers I'm not giving it much credence.