Saturday, July 29, 2017

Module 5 -- Assignments



Module 5
Assignment
1. Tattoos
Use Seamless Cloning to apply this tattoo to a face or an arm. Would you use
NORMAL_CLONE or MIXED_CLONE?
You can make an interactive interface using Highgui that implements
the following steps:
1. Display the input image of the face or the arm.
2. Specify a rectangle: Create an interface that allows you to
specify a rectangular region by clicking and dragging the
mouse cursor over the image. To implement this you will need
to work with mouse events in Highgui.
3. Clone butterfly: Clone the butterfly inside the rectangular
region. You will need to resize the butterfly image based on the
size of the rectangle.
Here is the link to the butterfly image


I started from the code of assignment 1-2 and implemented an alpha blend for the paste.


This image shows the result.


The assignment asks for Seamless Cloning. I used both flags, NORMAL_CLONE and MIXED_CLONE, so that I can compare the results.



Here is an image with the flag NORMAL_CLONE; you can see the blurred region around the butterfly tattoo.

Next is the summary of the results:




MIXED_CLONE sometimes produces artifacts.
The alpha blend is more precise, but it needs an alpha channel. I built the alpha channel from the image to be pasted, but something strange happens sometimes: look at the Hello Kitty on Hillary's face, where the color is faded.
2. Pinocchio’s Nose [ Advanced ]
Use the MLS + Head Pose Estimation to create an elongated Pinocchio's nose. It should work
properly for images and videos of faces. Here is an example output.
Note: We have not tried it ourselves, but this could be fun.



I got a small Pinocchio's nose on several images using MLS, but the face gets deformed along with the nose. Right now I can't see how to combine MLS with Head Pose Estimation to get a bigger nose. I'll keep thinking.




Module 5.1
Seamless Cloning
Table of Contents
Introduction
Poisson Image Editing
Cloning in 1D
Cloning in 2D
Seamless Cloning Example
References and Further Reading

In this module, a new function called "Seamless Cloning" is presented. In seamless cloning, we replace the x and y components of the gradient of the destination image with the x and y components of the gradient of the source image. The boundary conditions are satisfied on the boundary, where the pixel values should be the same as in the destination image.
The seamlessClone() function has the following parameters in C++:
seamlessClone (Mat src, Mat dst, Mat mask, Point center, Mat output, int flags)
The function accepts two flags, NORMAL_CLONE and MIXED_CLONE.
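The gradient-replacement idea is easiest to see in 1D, where the Poisson solution has a closed form: integrate the source gradient, then add a linear ramp so that both boundary values match the destination. This is my own toy sketch in plain C++, not OpenCV code:

```cpp
#include <vector>

// 1D "seamless clone": keep the destination values at both boundaries,
// but use the source gradient everywhere inside the region. In 1D the
// Poisson solution is the integrated source gradient plus a linear ramp
// that fixes the mismatch at the right boundary.
std::vector<double> seamlessClone1D(const std::vector<double>& dst,
                                    const std::vector<double>& src) {
    const std::size_t n = dst.size();
    std::vector<double> out(n);
    out[0] = dst[0];                                  // left boundary condition
    for (std::size_t i = 1; i < n; ++i)
        out[i] = out[i - 1] + (src[i] - src[i - 1]);  // integrate source gradient
    double err = dst[n - 1] - out[n - 1];             // right-boundary mismatch
    for (std::size_t i = 0; i < n; ++i)
        out[i] += err * double(i) / double(n - 1);    // linear correction
    return out;
}
```

The result carries the "texture" (gradient) of the source while blending into the destination at the edges, which is exactly what the 2D version does with a Poisson solver.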

Module 5.2
SnapChat Filters FaceSwap
Table of Contents
Overview
FaceSwap
In this lecture, we will learn how to perform face swapping, i.e. swapping out a face in one image
with a completely different face. We will be using concepts from weeks 3 and 4 in this
module. Please complete them before starting this module.
Why is Face-Swap difficult?
Figure 2 : Original Image of Presidential Candidates
FaceSwap : Step by Step
1. Face Alignment
Facial Landmark Detection
2. Find Convex Hull
3. Delaunay Triangulation
4. Affine warp triangles
5. Blending of images
5.1 Blending for Image based FaceSwap - Seamless Cloning
5.2 Blending for Video based FaceSwap - Color Correction and Alpha Blending
Code and Tutorial for Image based Face swap
Code and Tutorial for Video based Face swap
References and Further Reading
In this module, we learn a technique to swap one face for a completely different one. To achieve a result that can often fool the brain, these are the steps:
1.- Align the faces using the landmarks obtained with Dlib. We use all the contour points and the twelve points around the mouth.
2.- Find the convex hull of the points.
3.- Compute the Delaunay triangulation of the convex-hull points.
4.- Affine-warp the triangles.
5.- Blend the images to make the result look natural, with two methods:
5.1.- For images, we use "seamless cloning".
5.2.- For videos, we first apply color correction, then "alpha blending", using as mask the convex hull smoothed with a Gaussian blur.
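Step 4 maps each Delaunay triangle of one face onto the corresponding triangle of the other. The 2x3 transform that OpenCV's getAffineTransform returns can be computed by hand from the three point pairs; here is a plain-C++ sketch (struct and function names are mine):

```cpp
#include <array>

struct Pt2 { double x, y; };

// Affine transform mapping triangle s onto triangle d, returned as
// {a, b, tx, c, dd, ty} so that x' = a*x + b*y + tx, y' = c*x + dd*y + ty.
// This is what cv::getAffineTransform computes from three point pairs.
std::array<double, 6> affineFromTriangles(const Pt2 s[3], const Pt2 d[3]) {
    // edge vectors of the source and destination triangles
    double u1x = s[1].x - s[0].x, u1y = s[1].y - s[0].y;
    double u2x = s[2].x - s[0].x, u2y = s[2].y - s[0].y;
    double v1x = d[1].x - d[0].x, v1y = d[1].y - d[0].y;
    double v2x = d[2].x - d[0].x, v2y = d[2].y - d[0].y;
    double det = u1x * u2y - u2x * u1y;   // source triangle must not be degenerate
    // linear part L = [v1 v2] * [u1 u2]^-1
    double a  = (v1x * u2y - v2x * u1y) / det;
    double b  = (v2x * u1x - v1x * u2x) / det;
    double c  = (v1y * u2y - v2y * u1y) / det;
    double dd = (v2y * u1x - v1y * u2x) / det;
    // translation so that s[0] maps exactly onto d[0]
    double tx = d[0].x - (a * s[0].x + b * s[0].y);
    double ty = d[0].y - (c * s[0].x + dd * s[0].y);
    return { a, b, tx, c, dd, ty };
}
```

In the real pipeline this transform is handed to warpAffine once per triangle, clipped by the triangle's mask.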

Module 5.3
SnapChat Filters
Beardify
Table of Contents
Beard Filter
The Core Idea
Code and Tutorial for Beardify
In this module, methods are developed to apply a fake beard to a face. Two methods are explained, one for a static image and one for real-time video. Both are essentially the same: they are based on a beard image with an alpha mask, on which the reference points of interest have been marked so that it can be matched to the face where we want to paste it.
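The paste step comes down to per-pixel alpha blending with the beard's alpha mask. A minimal sketch of the operation (plain C++, a single channel per call; this is the idea, not the course code):

```cpp
// Per-pixel alpha blend: out = alpha*fg + (1 - alpha)*bg, alpha in [0, 1].
// With the beard as fg and the face as bg, the beard's alpha mask decides
// how much of each pixel comes from the beard image.
double alphaBlend(double fg, double bg, double alpha) {
    if (alpha < 0.0) alpha = 0.0;   // clamp mask values defensively
    if (alpha > 1.0) alpha = 1.0;
    return alpha * fg + (1.0 - alpha) * bg;
}
```

In practice the mask edge is feathered (e.g. with a Gaussian blur) so intermediate alpha values hide the seam.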

Module 5.4
SnapChat Filters
Aging
Table of Contents
Aging Filter
The Core Idea
How to Estimate Forehead Points
How to Generate a Mask from Points
What is a Convex hull?
Aging Filter Code and Tutorial

In this module, the idea of a filter to age a person's appearance is developed. The things that distinguish an old person from a young one are:
1.- Wrinkles.
2.- Dark brown and black spots.
3.- A paler appearance.
4.- Skin that loses elasticity, sags, and does not spring back.
To age a person's photo, we will use a frontal photograph of an old person and perform a "seamless cloning" with the MIXED_CLONE option.
Dlib does not directly give us any points for the forehead area, but we can extrapolate several forehead points from the landmarks obtained with Dlib.
The mask is obtained from the set of points of the convex hull.
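For the forehead extrapolation, one simple approach (an assumption on my part, not necessarily the course's exact method) is to push each eyebrow landmark further up by its distance to the eye point below it, in image coordinates where y grows downward:

```cpp
struct LandmarkPt { double x, y; };

// Estimate a forehead point above an eyebrow landmark by extending the
// eye-to-brow direction by the same distance again. In image coordinates
// the eye is below the brow (eye.y > brow.y), so the result lies above
// the brow: y = brow.y - (eye.y - brow.y) = 2*brow.y - eye.y.
LandmarkPt extrapolateForehead(LandmarkPt brow, LandmarkPt eye) {
    return { brow.x, 2.0 * brow.y - eye.y };
}
```

Doing this for each eyebrow point gives a rough forehead arc that can be added to the convex hull used for the mask.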


Module 5.5
SnapChat Filters
Moving Least Squares
Table of Contents
Moving Least Squares
Properties of MLS
How do you choose control points in MLS?
MLS based SnapChat Filters
Fatify Filter
Fatify Code and Tutorial
Happify Filter
Happify Code and Tutorial
References and Further Reading

In this module, a program is implemented that deforms an image using MLS (Moving Least Squares), a technique that produces a deformation from the movement of a few control points. The mathematical explanation of why this method produces a smooth and very fast distortion is in the paper referenced under further reading: http://faculty.cs.tamu.edu/schaefer/research/mls.pdf.
Two examples using MLS are proposed:
The Fatify filter deforms a person's jaw by displacing each point of the original jaw radially away from the tip of the nose.
The Happify filter deforms the expression of the eyes, anchoring a point in the middle of the nose and placing the deformation points at the outer corners of the eyes and the inner ends of the eyebrows. It also deforms the expression of the mouth, with a fixed point at the center of the chin and control points at the corners and center of the mouth.
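The affine variant of MLS from the referenced paper fits in a few dozen lines. This is my own plain-C++ sketch (no OpenCV; a real filter would evaluate it over a pixel grid, or over a sparse grid that is then interpolated):

```cpp
#include <cmath>
#include <vector>

struct Pt { double x, y; };

// Affine Moving Least Squares (Schaefer et al. 2006): control points p
// move to q, and every other point v follows smoothly with weights
// w_i = 1 / |p_i - v|^(2*alpha). f(v) = (v - p*) M + q*, where p*, q*
// are the weighted centroids and M = (sum w p^T p)^-1 (sum w p^T q)
// over the centered control points.
Pt mlsAffine(const std::vector<Pt>& p, const std::vector<Pt>& q,
             Pt v, double alpha = 1.0) {
    const std::size_t n = p.size();
    std::vector<double> w(n);
    double wsum = 0.0;
    for (std::size_t i = 0; i < n; ++i) {
        double dx = p[i].x - v.x, dy = p[i].y - v.y;
        double d2 = dx * dx + dy * dy;
        if (d2 < 1e-12) return q[i];          // v sits exactly on a control point
        w[i] = 1.0 / std::pow(d2, alpha);
        wsum += w[i];
    }
    Pt pc{0, 0}, qc{0, 0};                    // weighted centroids p*, q*
    for (std::size_t i = 0; i < n; ++i) {
        pc.x += w[i] * p[i].x; pc.y += w[i] * p[i].y;
        qc.x += w[i] * q[i].x; qc.y += w[i] * q[i].y;
    }
    pc.x /= wsum; pc.y /= wsum; qc.x /= wsum; qc.y /= wsum;
    double a11 = 0, a12 = 0, a22 = 0;                 // A = sum w p^T p (symmetric)
    double b11 = 0, b12 = 0, b21 = 0, b22 = 0;        // B = sum w p^T q
    for (std::size_t i = 0; i < n; ++i) {
        double px = p[i].x - pc.x, py = p[i].y - pc.y;
        double qx = q[i].x - qc.x, qy = q[i].y - qc.y;
        a11 += w[i] * px * px; a12 += w[i] * px * py; a22 += w[i] * py * py;
        b11 += w[i] * px * qx; b12 += w[i] * px * qy;
        b21 += w[i] * py * qx; b22 += w[i] * py * qy;
    }
    double det = a11 * a22 - a12 * a12;
    double m11 = ( a22 * b11 - a12 * b21) / det;      // M = A^-1 B
    double m12 = ( a22 * b12 - a12 * b22) / det;
    double m21 = (-a12 * b11 + a11 * b21) / det;
    double m22 = (-a12 * b12 + a11 * b22) / det;
    double vx = v.x - pc.x, vy = v.y - pc.y;
    return { vx * m11 + vy * m21 + qc.x, vx * m12 + vy * m22 + qc.y };
}
```

With unchanged control points the deformation is the identity, and a pure translation of all control points translates every query point, which is a quick way to sanity-check an implementation.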

Next week I hope to bring something more.







Sunday, July 23, 2017

Module 4 - Assignments




Assignment 4

eyefish Xformed01  K 1.3 (output image, corrected distortion)(before filter)
 
Assignment 4.1
1. Passport Photo
Use a photo of a person against a white background and generate a US ( or your own
country ) passport photo using the normalization and standardization techniques
introduced in this module.
You can find the detailed requirements here
https://travel.state.gov/content/passports/en/passports/photos.html
Things to bear in mind
a. What should be the resolution in pixels for a 2 inch x 2 inch photo if for a
reasonable photo quality print you need 300 DPI? What is DPI? Google it.
b. In the specification, it says head must be “between 1 -1 3/8 inches (25 - 35 mm)
from the bottom of the chin to the top of the head.” We don’t have this information
using Dlib’s landmarks. Can we approximate it by normalizing based on the
distance between the eyes instead? How would you test if your approximation is
correct?
c. If you want to get fancy, you can check if the input photo is a good enough
photo for creating a passport photo. This is not an easy task, and all the checks
below are not covered in the course, but you can check:
i. If face is detected.
ii. Resolution of the cropped face.
iii. Person is looking straight at the camera.
iv. [ Advanced users ] : If the image is sufficiently well lit. The problem sounds
easier than it is. The skin pixels are bad for estimating lighting. Can you
use the background? How about the whites of the eyes?
v. [ Advanced users ] : Check if the image is blurry.
vi. [ Advanced users ] : Check for noise levels.

Answer to assignment 4.1


a.- We say that 300 PPI == 300 DPI. For a 2 x 2 inch photo at 300 DPI, the resolution will be 600 x 600 dots, or 600 x 600 pixels on the screen.
b.1.- The specification says the head must be between 1 and 1 3/8 inches; with 600 x 600 resolution, the head must be between 300 and 412 dots.
b.2.- The specification says the eye line must be from 1 1/8 to 1 3/8 inches from the bottom of the photo, that is, from 337 to 412 pixels from the bottom (as our origin is the upper-left corner, the y-coordinate is between 188 and 263). We can approximate a head by saying that an 8-eye head is 400 dots, so one eye is 50 dots in our photo, and the minimum head is 300/50 = 6 eyes. Normally a head is between 6 and 7 eyes high; this way we leave some room above the head.
b.3.- Usually the eye line divides the face into two halves. We can test with Dlib that the distance from the eye line to the tip of the chin is lower than 412/2 = 206 dots.
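The arithmetic in a and b can be checked in a few lines. A minimal sketch (plain C++; truncating 412.5 to 412 matches the numbers above):

```cpp
// Passport-photo arithmetic: 2 x 2 inches at 300 DPI, head-height limits
// of 1 and 1 3/8 inches, and the eye-based approximation (one eye = 50 px,
// so the 300 px minimum head is 6 eyes). All numbers are derived from the
// specification, not measured.
int inchesToPixels(double inches, int dpi) {
    return (int)(inches * dpi);   // truncates 412.5 down to 412
}
```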
c.- Checking list
c.1.- Is a face detected? Use Dlib to detect the face.
c.2.- Resolution of the cropped face? 5 eye widths x 2 times the distance in b.3.
c.3.- Is the person looking straight at the camera? Find the position of points 68 and 69 in the 70-point landmark set.
c.4.- Is the image sufficiently well lit? Use the histogram of the whites of the eyes.
c.5.- Check if the image is blurry.
c.6.- Check for noise levels.
I wrote a program that computes the 70 landmarks for a photo, crops the photo according to the regulation for US passports, and adds white strips to center the photo as required.
The program checks whether the person in the photo is looking straight at the camera.
Below are some images with these results. I used several image sizes, but the output image on the right is always 600 x 600 pixels:

looking straight at the camera, value = 0.013228

looking straight at the camera, value = 0.00423745

looking straight at the camera, value = 0.0325, greater than my empirical limit of 0.02

looking straight at the camera, value = 0.000971146


looking straight at the camera, value = 0.00414865


Assignment 4.2
Blink detection


a. Use a different measure for finding the status of the eye.
b. Use a different method for normalizing the eye area and check its robustness.

I computed several distances:
1.- The distance between landmark(38) and landmark(40) (between the eyelids of the right eye).
2.- The distance between landmark(44) and landmark(46) (between the eyelids of the left eye).
3.- The distance between landmark(36) and landmark(45) (between the outer corners of the eyes), which happens to be approximately 3 times the distance between the corners of one eye.
float factor=3.0;
normalizedCount = (float)factor*(lenEyelidsLeft + lenEyelidsRight) / lenOutEyesCorners;

Explanation: 

In Module 4.6, Blink and Drowsiness Detection, the area of the eyes is calculated and divided by the squared length of the eye to normalize the measure.

The area of an eye is approximately half the area of its bounding box. We add the two eyelid distances, left and right, and imagine multiplying by the length of the eye.
In the denominator we have the outer distance between the eyes, again imagined multiplied by the length of the eye.
We cancel the common factor in numerator and denominator in our head.
The expression is three times lower because the outer distance between the eyes is about three times one eye width; this explains factor = 3.0.

It works and I don't need to tweak other parameters. 
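The measure above can be packaged as a small standalone function. A sketch in plain C++ (no Dlib; the landmark points are passed in directly, using Dlib's numbering where 38/40 and 44/46 are the eyelids and 36/45 the outer corners). The value is scale-invariant, which is the whole point of the normalization:

```cpp
#include <cmath>

struct Lm { double x, y; };

static double dist(Lm a, Lm b) {
    return std::hypot(a.x - b.x, a.y - b.y);
}

// Blink measure: sum of the two eyelid gaps, normalized by the distance
// between the outer eye corners (roughly three eye widths) and scaled by
// factor = 3 so the value is comparable to a per-eye ratio.
double blinkMeasure(Lm r38, Lm r40, Lm l44, Lm l46, Lm c36, Lm c45) {
    const double factor = 3.0;
    double lenEyelids = dist(r38, r40) + dist(l44, l46);
    double lenOutEyesCorners = dist(c36, c45);
    return factor * lenEyelids / lenOutEyesCorners;
}
```

Because numerator and denominator are both pixel distances, scaling the whole face (moving closer to the camera) leaves the measure unchanged.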



Assignment 4.3

3. Funny faces


Take an image having fisheye distortion and try to correct it by inverting the distortion.
You can choose the parameters manually using sliders.




Image fisheye_donald.jpg

My answer:
I started with the equations:
I got interesting outputs:


grid default values

grid pincushion distortion, parameter .0000036


grid barrel distortion, parameter -0000018

eyefish, parameter 0000061b

That came close inside the red rectangle, but it does not solve the problem.

Next, I tried the trigonometric function used in module 4.7, and that did the trick with k = 1.3:
// Pincushion distortion function
float rn = std::min((double)r, r + (pow(r, k) - r) * cos(M_PI * r));
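Factoring the mapping above into a standalone function makes it easy to probe: for k > 1 it leaves the outer half of the radius range untouched (the cosine is negative there, so the min picks r) and pulls interior radii toward the center. A self-contained sketch (π computed with acos(-1) so the snippet does not depend on M_PI):

```cpp
#include <algorithm>
#include <cmath>

// The radial mapping used above, isolated for inspection: r is the
// normalized radius in [0, 1] and k controls the correction strength
// (k = 1.3 worked for the fisheye image).
double remapRadius(double r, double k) {
    const double pi = std::acos(-1.0);
    return std::min(r, r + (std::pow(r, k) - r) * std::cos(pi * r));
}
```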



eyefish Xformed01  K 1 default parameter (input image)

eyefish Xformed01  K 1.3 (output image, corrected distortion)

There are some elliptic artifacts because the transformation is not bijective, and I can't easily solve the transformation equation for xu and yu. Let's try to improve it.

I know that the inverse transformation must be continuous. It is represented by the arrays that I created, IXu(y,x) and IYu(y,x), so I filter these arrays with medianBlur() to fill the gaps. The output images are below:



eyefish Xformed01  K 1.0 default parameter + MedianBlur (input image without distortion correction)



eyefish Xformed01  K 1.3 MedianBlur (elliptic artifacts disappear) (output image with distortion corrected)


quadCircle Xformed01  K 1.0 default value MedianBlur (this image has the same distortion as eyefish donald_trump) (input image without distortion correction)



quadCircle Xformed01  K 1.3 MedianBlur (output image with distortion corrected)