Jan Zizka et al. (Eds): ACSTY, NATP - 2016
pp. 15–31, 2016. © CS & IT-CSCP 2016. DOI: 10.5121/csit.2016.61402
RECOGNITION OF RECAPTURED IMAGES
USING PHYSICAL BASED FEATURES
S. A. A. H. Samaraweera¹ and B. Mayurathan²

¹Department of Computer Science, University of Jaffna, Sri Lanka
anuash119@gmail.com
²Department of Computer Science, University of Jaffna, Sri Lanka
barathy@jfn.ac.lk
ABSTRACT
With the development of multimedia technology and digital devices, it is now very easy to
recapture high-quality images from LCD screens. In authentication, the use of such recaptured
images can be very dangerous, so recognizing recaptured images is important for preserving
authenticity. Image recapture detection (IRD) distinguishes real-scene images from recaptured
ones. This paper proposes an image recapture detection method based on a set of physics-based
features, combining low-level features including texture, HSV colour and blurriness. Twenty-six
feature dimensions are extracted to train a support vector machine classifier with a linear
kernel. The experimental results show that the proposed method is efficient, with a good
recognition rate in distinguishing real-scene images from recaptured ones, while using
lower-dimensional features than state-of-the-art recapture detection methods.
KEYWORDS
Image Recapture Detection, Texture, HSV, Blurriness & Support Vector Machine
1. INTRODUCTION
Since the last century, information technology has advanced rapidly and digital documents are
replacing paper documents. This technology enables digital documents to be easily modified and
converted, which makes our lives easier in digital matters. However, a photograph implies
truthfulness: unlike text, an image provides an effective communication channel for humans.
Hence, maintaining the trustworthiness of a digital image is a major challenge in today’s world.

A recaptured image differs from a common photograph in that what is captured is an image
reproduction surface rather than a general scene. Image recapture detection distinguishes real
images from recaptured ones, i.e. images of media that display real-scene images, such as
printed pictures or LCD displays. The difficulty of recognizing recaptured images is illustrated
in Figure 1, where (a) and (b) are real images and (c) and (d) are recaptured images. It is an
extremely complicated task for an artificial system to distinguish recaptured images from real
ones.
16 Computer Science & Information Technology (CS & IT)
In recent years, a considerable amount of research has been conducted on image recapture
detection to restore the trustworthiness of digital images [1], [2], [3]. The image recapturing
process can automatically restore the intrinsic image regularities and remove some common
tampering anomalies. Recognition of recaptured images is therefore an important task for current
image forensic systems. Apart from that, an image forensic system can detect rebroadcast attacks
on a biometric identification system. We therefore study the problem of recaptured image
detection as an application in image forensics.
Figure 1. Difficulties of recognizing recaptured images
On the other hand, face authentication systems on mobile devices such as laptop computers and
smart phones are designed with liveness detection to verify a live face. For such systems,
faking an identity by recapturing a printed face photo has become a serious issue.

In robot vision, differentiating objects on a poster from real ones requires more intelligence,
and IRD is useful for that purpose as well. Another important application of IRD is composite
image detection: one way to conceal composition in a composite image is to recapture it.
The processes of producing real-scene images and the corresponding recaptured images are shown
in Figure 2. As shown in Figure 2 (a), a real image can be obtained with any type of camera. For
the reproduction process, the real image is first captured by a camera, then reproduced on a
printing or display medium, for example printed on office A4 paper with a colour laser printer
or displayed on the LCD screen of a PC. Finally, the recaptured image is obtained by
photographing the reproduction.

Displaying or printing a scene on any type of physical medium leads to a poor-quality
recaptured image, with easily identifiable artefacts such as texture patterns and colour fading.
As shown in Figure 3, low-quality recaptured images can be easily identified by the human eye.
Figure 2. The process of producing real image and recaptured image
Figure 3. Comparison of a real image (a) and a recaptured image (b)
For instance, consider display on an LCD screen as the reproduction process, as illustrated in
Figure 4. Cao and Kot [5] compared real images and corresponding recaptured images under a large
number of controllable settings, including camera settings, LCD settings and environmental
settings. They concluded that the visual quality of finely recaptured images is significantly
better than that of casually recaptured images. This gives forgers an opportunity to recapture
artificially generated scenery and use the recaptured image to fool an image forensic system.
Recently, a Vietnamese security group found that most commercial laptop computers with face
authentication can be attacked simply by presenting a human face printed on A4-size paper [6].
Figure 4. Some controllable settings for reproduction process on a LCD screen
2. LITERATURE REVIEW
This section reviews several approaches for identifying recaptured images, including studies on
distinguishing real-scene images from images recaptured from printing paper and LCD screens.
Xinting Gao et al. [1] introduced a physics-based approach for recaptured image detection. The
set of physics-based features is composed of the contextual background information, the spatial
distribution of specularity related to the surface geometry, the image gradient capturing the
non-linearity in the recaptured image rendering process, the colour information and contrast
related to the quality of reproduction rendering, and a blurriness measure related to the
recapturing process. These features were used to classify recaptured images from real ones,
achieving significantly better classification performance on low-resolution images compared to
wavelet statistical features.
Ke et al. [2] proposed an image recapture detection method based on multiple feature
descriptors. It uses combinations of low-dimensional features including a texture feature, a
noise feature, a difference-of-histogram feature and a colour feature. Their experimental
results demonstrated that the method is efficient, with a good detection rate in distinguishing
real-scene images from recaptured ones, and that it possesses low time complexity.
Hany Farid and Siwei Lyu [4] presented a statistical model using first- and higher-order
statistics that capture certain statistical regularities of natural images.
Hang Yu et al. [3] proposed a dichromatic model exploiting the high-frequency spatial variations
in the specular component of a recaptured image, a distinctive feature that results from the
micro-structure of the printing paper. Cao and Kot [5] used a probabilistic support vector
machine classifier to distinguish images recaptured from LCD screens from natural images,
performing experiments with three types of features: a texture feature based on Local Binary
Patterns, multi-scale wavelet statistics and a colour feature.
3. DATASET
Bai et al. [7] found that image resolution affects the performance of the algorithms. Xinting
Gao et al. [8] therefore presented a smart phone recapture image database taken by smart phone
cameras. Even though there are other publicly available databases, I used this database because
the general resolution of its images is VGA (640 x 480). The database was constructed using the
following criteria:
• The images come in pairs of a real image and the corresponding recaptured one, taken by
the same end-user camera.
• The images consist of outdoor natural scenes, indoor office or residence scenes, and
close-up and distant scenes.
3.1. Real Image Dataset
The real images are obtained by any type of camera, as shown in Figure 2 (a). The images in the
real image dataset were produced using three popular brands of smart phones: Acer M900, Nokia
N95 and HP iPAQ hw6960. These camera phones were set to auto mode whenever possible, and all
three types of phones have back-facing cameras. In total, I used 1094 real images. Table 1 lists
the number of images taken with each brand of camera.
Table 1. The number of real images.
Types Images
Acer B 407
HP B 369
Nokia B 318
Total 1094
3.2. Recaptured Image Dataset
As illustrated in Figure 2 (b), the reproduction process is purely image-based. The images in
the recaptured image dataset were produced using three types of DSLR (digital single-lens
reflex) cameras: Nikon D90, Canon EOS 450D and Olympus E-520. These cameras were set to auto
mode whenever possible and the resulting images were saved in JPEG format. The DSLR cameras have
high resolution (greater than 3000 x 2000 pixels) and high quality. In constructing the
recaptured dataset, two types of reproduction processes were used: printing on paper and
displaying on a screen. The images were printed on A4-size office paper using an HP CP3505dn
laser printer and a Xerox Phaser 8400 ink printer, and some were also printed as 4R glossy and
matte photos. For LCD display, a Dell 2007FP LCD screen (1600 x 1200 pixels) was used. Finally,
the reproduced images were recaptured by the above-mentioned camera phones. Table 2 lists the
number of recaptured images for each reproduction process. In total, I used 1137 recaptured
images.
Table 2. The number of recaptured images.
4. METHODOLOGY
In this paper, I propose an image recapture detection method based on physical features. A
working diagram of the proposed method is illustrated in Figure 5. The images in the real image
dataset and the recaptured image dataset are used in the feature extraction step: for each
image, texture, HSV colour and blurriness features are extracted. The SVM classifier is then
trained on these features and their labels. This is the training procedure of the method. In the
testing procedure, the features of the testing image are extracted, and the SVM classifier
classifies them as belonging to either a real image or a recaptured image.
4.1. Feature Extraction
In general, the recaptured images and corresponding real images will never be same due to the
direction of the light, distance between the camera and the scenery, sensor resolution, the lens
quality and so forth. By considering this problem as a binary classification task, I introduce
following three types of features including Texture, HSV colour and Blurriness to differentiate
the recaptured images from real images.
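As a concrete illustration, the three feature groups are concatenated into a single
26-dimensional vector. The split assumed in the sketch below, 16 (CS-LBP texture histogram) + 9
(HSV colour moments) + 1 (blurriness score), is my reading of the feature sizes described in the
following subsections; the placeholder values stand in for actual extracted features.

```python
# Hypothetical assembly of the 26-D feature vector; the 16 + 9 + 1 split is
# an assumption consistent with the feature sizes described below.
texture_hist = [0.0] * 16     # placeholder 16-bin CS-LBP texture histogram
colour_moments = [0.0] * 9    # placeholder 9 HSV colour moments
blur_score = [0.0]            # placeholder single blurriness value

feature_vector = texture_hist + colour_moments + blur_score
assert len(feature_vector) == 26
```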
Figure 5. Diagram for the proposed image recaptured detection method
4.1.1. Texture feature
Figure 6. LBP and CS-LBP features for a neighbourhood of 8 pixels
Texture plays an important role in computer vision applications. Over the past decades, many
algorithms have been presented for texture feature extraction. They can be divided into two main
approaches, statistical and structural; among the most commonly used algorithms are Gabor
filters, wavelet transforms and so forth. Recently, the local binary pattern (LBP) has received
considerable attention in many applications as a statistical approach [9]. Due to the high
dimensionality of the LBP operator, new experiments are being carried out with the
centre-symmetric local binary pattern (CS-LBP), an extension of the LBP operator. Besides
reducing dimensionality, CS-LBP captures gradient information better than the basic LBP. Since
the CS-LBP descriptor is computationally simple, effective and robust to various image
transformations, a brief review of CS-LBP follows.

The CS-LBP operator [10] was motivated by the LBP operator. Histograms of the LBP operator are
long (256 bins) and the operator is not robust on flat image regions; CS-LBP was proposed to
reduce these drawbacks. Where the LBP operator compares each neighbouring pixel with the centre
pixel, the CS-LBP operator compares centre-symmetric pairs of pixels, as illustrated in Figure
6. For the same number of neighbours, it produces half the number of comparisons, so LBP
produces 256 (2^8) different binary patterns, whereas CS-LBP produces only 16 (2^4) different
patterns for 8 neighbours. For flat areas, the operator's robustness can be increased by
thresholding the grey-level differences at a small value T. Thus, the CS-LBP operator is defined
by Eq. (1).
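Since the operator thresholds the grey-level differences of the four centre-symmetric pairs of
an 8-neighbourhood, it can be sketched in a few lines of NumPy. The function names and the
default threshold below are illustrative choices, not values prescribed here.

```python
import numpy as np

def cs_lbp(image, threshold=0.01):
    """Centre-symmetric LBP for an 8-neighbourhood (4 centre-symmetric pairs).

    CS-LBP(T) = sum_{i=0}^{3} s(n_i - n_{i+4}) * 2^i, with s(x) = 1 if x > T
    else 0, so each pixel gets a 4-bit code (16 possible patterns)."""
    img = image.astype(np.float64)
    # Differences of the four centre-symmetric pairs:
    n0 = img[:-2, 1:-1] - img[2:, 1:-1]    # north - south
    n1 = img[:-2, 2:] - img[2:, :-2]       # north-east - south-west
    n2 = img[1:-1, 2:] - img[1:-1, :-2]    # east - west
    n3 = img[2:, 2:] - img[:-2, :-2]       # south-east - north-west
    code = ((n0 > threshold).astype(int)
            + 2 * (n1 > threshold)
            + 4 * (n2 > threshold)
            + 8 * (n3 > threshold))
    return code

def cs_lbp_histogram(image, threshold=0.01):
    """Normalized 16-bin histogram of CS-LBP codes (the texture feature)."""
    codes = cs_lbp(image, threshold)
    hist, _ = np.histogram(codes, bins=16, range=(0, 16))
    return hist / hist.sum()
```

On a perfectly flat image every difference is below T, so all codes are 0 and the histogram
collapses into its first bin, which is exactly the robustness-on-flat-areas behaviour described
above.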
4.1.2. Colour feature
In the reproduction stage, the reproduction devices introduce some tint into the reproduced
images. In addition, the lighting can reduce the contrast and saturation of a recaptured image.
As a result, the colour features of a recaptured image look different from those of its
original, as shown in Figure 7.
Figure 7. Comparison of the colour features introduced by the reproduction process
A colour model describes colours, usually representing a colour as a tuple (generally of three
values). The purpose of a colour model is to facilitate the specification of colours in a
certain standard way. The RGB colour model is the most common colour model for digital images
because it retains compatibility with computer displays. However, RGB has some drawbacks: it is
not well suited to object specification and colour recognition, it is difficult to pick out a
specific colour in the RGB model, and it is a hardware-oriented system reflecting the use of
CRTs. Apart from RGB, the HSV colour model is commonly used in colour image retrieval systems,
since HSV colours are defined in a way closer to human perception than RGB.
HSV stands for Hue, Saturation and Value. The coordinate system is a hexagon, shown in Figure 8
(a), and Figure 8 (b) shows a view of the HSV colour model. Value represents the intensity of a
colour and is decoupled from the colour information in the image. The hue and saturation
components are intimately related to the way the human eye perceives colour, giving image
processing algorithms a physiological basis. As hue varies from 0 to 1.0, the corresponding
colours vary from red through yellow, green, cyan, blue and magenta, back to red, so that there
are red values at both 0 and 1.0. As saturation varies from 0 to 1.0, the corresponding colours
(hues) vary from unsaturated (shades of grey) to fully saturated (no white component). As value,
or brightness, varies from 0 to 1.0, the corresponding colours become increasingly brighter.
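As a quick sanity check of these ranges, Python's standard colorsys module converts RGB triples
in [0, 1] to HSV triples in [0, 1]:

```python
import colorsys

# colorsys.rgb_to_hsv maps r, g, b in [0, 1] to (h, s, v) in [0, 1].
red = colorsys.rgb_to_hsv(1.0, 0.0, 0.0)    # hue 0.0 is red, fully saturated
green = colorsys.rgb_to_hsv(0.0, 1.0, 0.0)  # hue 1/3, fully saturated
grey = colorsys.rgb_to_hsv(0.5, 0.5, 0.5)   # saturation 0.0: a shade of grey
```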
Figure 8. (a) HSV Cartesian Coordinate System (b) HSV colour model
Colour histograms and colour moments are widely used to represent the colour information of an
image. The colour histogram is the approach most frequently adopted in content-based image
retrieval systems; it describes the frequency of colours in an image. Even though it is a widely
used feature, it has some disadvantages: it is sensitive to noisy interference, a small change
in the image can produce a large change in histogram values, and it is computationally
expensive.
Colour moments are measures that can be used to differentiate images based on their colour
features. The basic assumption is that the distribution of colour in an image can be interpreted
as a probability distribution, and probability distributions are characterized by a number of
unique moments (for example, a normal distribution is characterized by its mean and variance).
It therefore follows that if the colour in an image follows a certain probability distribution,
the moments of that distribution can be used as features to identify the image based on colour.

The mean, standard deviation and skewness of an image are known as colour moments. In the HSV
colour model, a colour is defined by three values: hue, saturation and value. Colour moments are
calculated for each of these channels, so an image is characterized by 9 moments: 3 moments for
each of the 3 colour channels. Let p_ij denote the i-th colour channel at the j-th image pixel.
The three colour moments can be defined as:
• Moment 1- Mean:
Mean can be described as the average colour value in the image
• Moment 2- Standard Deviation:
The standard deviation is the square root of the variance of the distribution
• Moment 3- Skewness:
Skewness can be described as a measure of the degree of asymmetry in the distribution.
In the HSV colour space, the variable i takes values from 1 to 3 (i.e. 1 = H, 2 = S, 3 = V). The
resulting feature for the image therefore contains 9 values in the form of a 3 x 3 matrix of the
following format:
where:
E11, E12, E13 represent the mean values for H, S and V;
σ11, σ12, σ13 represent the standard deviation values for H, S and V;
s11, s12, s13 represent the skewness values for H, S and V.
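The nine moments can be sketched as follows. colorsys performs the RGB-to-HSV conversion, and
the skewness here is taken as the signed cube root of the third central moment, which is one
common convention; this choice, like the function name, is an assumption for illustration.

```python
import colorsys
import math

def colour_moments(rgb_pixels):
    """Return the 9 colour moments: (mean, std dev, skewness) for H, S, V.

    rgb_pixels: iterable of (r, g, b) tuples with components in [0, 1]."""
    hsv = [colorsys.rgb_to_hsv(r, g, b) for r, g, b in rgb_pixels]
    n = len(hsv)
    moments = []
    for ch in range(3):                      # 0 = H, 1 = S, 2 = V
        vals = [p[ch] for p in hsv]
        mean = sum(vals) / n
        std = math.sqrt(sum((v - mean) ** 2 for v in vals) / n)
        # Signed cube root of the third central moment (assumed convention).
        m3 = sum((v - mean) ** 3 for v in vals) / n
        skew = math.copysign(abs(m3) ** (1 / 3), m3)
        moments.append((mean, std, skew))
    return moments
```

For a uniformly red image every pixel maps to HSV (0, 1, 1), so the hue channel yields (0, 0, 0)
and the saturation and value channels yield (1, 0, 0): nine values forming the 3 x 3 matrix
described above.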
4.1.3. Blurriness
In a recaptured image, blurriness can arise from three key factors:
• The first capture device or the printing device may be of low resolution.
• The display medium may not be in the focus range of the camera under the specific
recapture settings.
• If the end-user camera has a limited depth of field, the distant background may be
blurred while the entire display medium is in focus.
Figure 9. (a) and (b)
In this research work, I exploit such information as a distinguishing feature to recognize
whether an image is a real-scene image or a recaptured one. The method proposed by Crete et al.
[11], based on the discrimination between different levels of blur perceptible in the same
image, is used to compute a no-reference perceptual blur metric ranging from 0 to 1,
corresponding respectively to the best and the worst quality in terms of blur perception, as
shown in Figure 9 (a) and (b).
Figure 10. Simplified flow chart of the blur estimation principle
As shown in Figure 10, in the first step the intensity variations between neighbouring pixels of
the input image are computed. A low-pass filter is then applied and the variations between
neighbouring pixels are computed again. Comparing these intensity variations allows us to
evaluate the blur annoyance: a high variation between the original and the blurred image means
that the original image is sharp, whereas a slight variation means that the original image is
already blurred.
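A simplified sketch of this estimation principle follows, assuming horizontal and vertical
averaging filters and the variation comparison described above; the function name and the filter
size are my choices rather than values fixed by [11].

```python
import numpy as np

def blur_metric(gray):
    """Simplified no-reference blur metric after Crete et al.: ~0 for a sharp
    image, ~1 for a heavily blurred one.  `gray` is a 2-D float array."""
    # Blur the input with a horizontal and a vertical 1-D averaging filter.
    k = np.ones(9) / 9.0
    blur_h = np.apply_along_axis(lambda r: np.convolve(r, k, mode="same"), 1, gray)
    blur_v = np.apply_along_axis(lambda c: np.convolve(c, k, mode="same"), 0, gray)

    # Absolute intensity variations between neighbouring pixels.
    d_h = np.abs(np.diff(gray, axis=1));  d_bh = np.abs(np.diff(blur_h, axis=1))
    d_v = np.abs(np.diff(gray, axis=0));  d_bv = np.abs(np.diff(blur_v, axis=0))

    # Variations destroyed by the blur: large loss means the input was sharp.
    v_h = np.maximum(0.0, d_h[1:-1, 1:-1] - d_bh[1:-1, 1:-1])
    v_v = np.maximum(0.0, d_v[1:-1, 1:-1] - d_bv[1:-1, 1:-1])

    s_h = d_h[1:-1, 1:-1].sum();  s_v = d_v[1:-1, 1:-1].sum()
    b_h = (s_h - v_h.sum()) / s_h if s_h > 0 else 0.0
    b_v = (s_v - v_v.sum()) / s_v if s_v > 0 else 0.0
    return max(b_h, b_v)
```

Blurring an already-blurred image changes its neighbouring-pixel variations only slightly, so
the surviving-variation ratio (and thus the metric) is higher than for a sharp image, matching
the principle of Figure 10.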
4.2. Classification
Classification consists of predicting a certain outcome based on a given input. Among the
various classification techniques, the Support Vector Machine (SVM) was originally developed for
solving binary classification problems [12].
Figure 11. An example of a separable problem in a 2 dimensional space
Consider Figure 11 as an example of a linearly separable problem. Suppose we are given a set of
l training points of the form:

We then try to find a classification boundary function f(x) = y that not only correctly
classifies the input patterns in the training data but also correctly classifies unseen
patterns. The classification boundary f(x) = 0 is a hyperplane defined by its normal vector w,
which divides the input space into the class +1 vectors on one side and the class -1 vectors on
the other. Then there exists f(x) such that
The optimal hyperplane is defined by maximizing the distance between the hyperplane and the data
points closest to it (the support vectors). We therefore need to maximize the margin
γ = 2/||w||, or equivalently minimize ||w||, subject to constraint (6). This is a quadratic
programming (QP) optimization problem that can be expressed as:

Datasets are often not linearly separable in the input space. To deal with this situation, slack
variables ξi are introduced into Eq. (8), where C is the parameter that determines the trade-off
between maximizing the margin and minimizing the classification error. The QP optimization
problem then becomes:

The solution to the above optimization problem has the form:

where
Φ(·) is the mapping function that transforms the vectors from input space to feature space.
The dot product in Eq. (10) can be computed without explicitly mapping the points into feature
space by using a kernel function. The proposed method uses a linear kernel of the form
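For illustration, the soft-margin objective with a linear kernel can also be minimized
approximately by stochastic subgradient descent rather than by solving the QP exactly; the
sketch below takes this simpler route (hyper-parameters are arbitrary choices, and this is not
the solver used in the experiments).

```python
import numpy as np

def train_linear_svm(X, y, C=1.0, epochs=200, lr=0.01, seed=0):
    """Approximate soft-margin linear SVM via subgradient descent on the
    primal objective  (1/2)||w||^2 + C * sum_i max(0, 1 - y_i (w.x_i + b)).

    X: (n, d) array; y: labels in {-1, +1}.  Returns (w, b)."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w = np.zeros(d)
    b = 0.0
    for _ in range(epochs):
        for i in rng.permutation(n):
            if y[i] * (X[i] @ w + b) < 1:      # point violates the margin
                w = (1 - lr) * w + lr * C * y[i] * X[i]
                b += lr * C * y[i]
            else:                              # only the regularizer acts
                w = (1 - lr) * w
    return w, b

def predict(X, w, b):
    """Classify by the side of the hyperplane f(x) = w.x + b = 0."""
    return np.where(X @ w + b >= 0, 1, -1)
```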
5. EXPERIMENTAL SETUP AND TESTING RESULTS
This section evaluates the proposed image recapture detection method. In this experiment, I used
the smart phone recapture image database presented by Xinting Gao et al. [8]. The real-scene
images are obtained by three popular brands of smart phones with back-facing cameras: Acer M900,
Nokia N95 and HP iPAQ hw6960. The recaptured images are obtained using three types of DSLR
cameras: Nikon D90, Canon EOS 450D and Olympus E-520.

For this experiment, 2231 images in total are used: 1094 real-scene images and 1137 recaptured
images. Three experiments are performed to demonstrate the performance of the proposed method.
First, the proposed method is compared with several state-of-the-art methods. Second, the
performances of different combinations of the features used in this paper are compared. Last,
the performance across brands of smart phones is compared.
5.1. Experiment I
The performance of the proposed method is compared with state-of-the-art methods. As suggested
by Ke et al. [2], the whole dataset is partitioned into training and testing images as shown in
Table 3, and performance is measured using accuracy.
Table 3. Training and testing image selection.
Twenty-six-dimensional low-level features, including texture, HSV colour and blurriness, are
extracted from the training images. Table 4 shows the recognition rate of the proposed method
for different training and testing samples.
Table 4. Recognition rate as accuracy with different image samples on Smart Phone Recapture Image
Database [8].
Table 5 compares the performance of the proposed method with state-of-the-art methods. According
to the accuracies shown in Table 5, the proposed method gives similar performance to the other
methods.

Table 5. Comparison of feature dimension and performance achieved by different methods on the
smart phone recapture image database [8].
5.2. Experiment II
Table 6. Dimensions and performance on the different combinations of features.
In order to find the most robust feature for recognizing recaptured images, I computed the
performance of the recaptured image detection method using several sets of image features:
[texture], [HSV colour], [blurriness], [texture + HSV colour], [texture + blurriness] and [HSV
colour + blurriness]. The results are shown in Table 6.

It is observed that the texture feature extracted with the CS-LBP operator is the most robust
feature for recognizing recaptured images.
5.3. Experiment III
In order to find out which smart phone has a good-quality capturing process, the proposed method
is applied to image samples captured by the three popular brands of smart phones: Acer M900,
Nokia N95 and HP iPAQ hw6960. Table 7 shows that the proposed image recapture detection method
based on physical features is more effective for the Acer M900 than for the Nokia N95 and HP
iPAQ hw6960.
Table 7. Performance on the brands of smart phones
From this experiment it is concluded that the proposed image recapture detection method achieves
comparable classification performance using low-dimensional features including texture, HSV
colour and blurriness. Among them, the texture feature extracted using the CS-LBP operator is
crucial for the recognition problem, and the proposed method is most effective for the Acer
M900.
6. CONCLUSIONS
In this paper, I proposed an image recapture detection method based on a set of physics-based
features, combining low-level features including texture, HSV colour and blurriness. The
proposed method is efficient, with a good recognition rate in distinguishing real-scene images
from recaptured ones. Even though it uses low-dimensional features, it works well both with few
and with many training images.

A limitation of this work is that the dataset consists only of images taken with the back-facing
cameras of three types of smart phones, as described in the Dataset section. This affects the
ability of Experiment III to assess the overall performance of different brands of smart phones
with the proposed method.
Future work includes using the most robust feature to train two dictionaries with the K-SVD
approach [13]; using these two learned dictionaries, we would be able to determine whether a
given image has been recaptured. Another direction is to extract further features and measure
the performance of different combinations to find the best combination of all the features.
REFERENCES
[1] X. Gao, T.-T. Ng, B. Qiu & S.-F. Chang, (2010) "Single-view recaptured image detection
based on physics-based features", IEEE International Conference on Multimedia and Expo (ICME),
pp1469-1474.
[2] Y. Ke, Q. Shan, F. Qin & W. Min, (2013) "Image recapture detection using multiple features",
International Journal of Multimedia and Ubiquitous Engineering, Vol. 8, No. 5, pp71-82.
[3] H. Yu, T.-T. Ng & Q. Sun, (2008) "Recaptured Photo Detection Using Specularity
Distribution", IEEE International Conference on Image Processing, pp3140-3143.
[4] H. Farid & S. Lyu, (2003) "Higher-order wavelet statistics and their application to digital
forensics", IEEE Workshop on Statistical Analysis in Computer Vision.
[5] H. Cao & A. C. Kot, (2010) "Identification of Recaptured Photographs on LCD Screens", IEEE
International Conference on Acoustics, Speech and Signal Processing, pp1790-1793.
[6] D. Ngo, (2008) Vietnamese security firm: Your face is easy to fake, [Online], Available:
http://news.cnet.com/8301-17938105-10110987.html
[7] J. Bai, T.-T. Ng, X. Gao & Y. Q. Shi, (2010) "Is physics-based liveness detection truly
possible with a single image?", IEEE International Symposium on Circuits and Systems (ISCAS).
[8] X. Gao, B. Qiu, J. Shen, T.-T. Ng & Y. Q. Shi, (2011) Digital Watermarking: 9th
International Workshop, IWDW, Revised Selected Papers, pp90-104.
[9] W. Xiaosheng & S. Junding, (2009) "An effective texture spectrum descriptor", Fifth
International Conference on Information Assurance and Security.
[10] M. Heikkila & C. Schmid, (2009) "Description of interest regions with local binary
patterns", Pattern Recognition, Vol. 42, No. 3, pp425-436.
[11] F. Crete, T. Dolmiere, P. Ladret & M. Nicolas, (2007) "The blur effect: perception and
estimation with a new no-reference perceptual blur metric", SPIE International Society for
Optical Engineering.
[12] A. Ramanan, S. Suppharangsan & M. Niranjan, (2007) "Unbalanced Decision Tree for
Multi-class Classification", IEEE International Conference on Industrial and Information Systems
(ICIIS'07), pp291-294.
[13] T. Thongkamwitoon, H. Muammar & P. L. Dragotti, (2014) "Robust Image Recapture Detection
using a K-SVD Learning Approach to train dictionaries of Edge Profiles", IEEE International
Conference on Image Processing (ICIP), pp5317-5321.
AUTHORS
S. A. A. H. Samaraweera received the B.Sc. Special Degree in Computer Science from the
University of Jaffna, Sri Lanka in 2016. Her current research interests include image processing
and digital image forensics.
B. Mayurathan received PhD Degree in Computer Science from University of
Peradeniya, Sri Lanka in 2014. Her current research interests include Computer vision
and Machine Learning.
M.sc.iii sem digital image processing unit vM.sc.iii sem digital image processing unit v
M.sc.iii sem digital image processing unit v
 
20120140502012
2012014050201220120140502012
20120140502012
 
Image processing sw & hw
Image processing sw & hwImage processing sw & hw
Image processing sw & hw
 
IRJET - Contrast and Color Improvement based Haze Removal of Underwater Image...
IRJET - Contrast and Color Improvement based Haze Removal of Underwater Image...IRJET - Contrast and Color Improvement based Haze Removal of Underwater Image...
IRJET - Contrast and Color Improvement based Haze Removal of Underwater Image...
 
M.sc.iii sem digital image processing unit i
M.sc.iii sem digital image processing unit iM.sc.iii sem digital image processing unit i
M.sc.iii sem digital image processing unit i
 
Hand gesture recognition using support vector machine
Hand gesture recognition using support vector machineHand gesture recognition using support vector machine
Hand gesture recognition using support vector machine
 
Unit 1 a notes
Unit 1 a notesUnit 1 a notes
Unit 1 a notes
 

Viewers also liked

ALTERNATIVES TO BETWEENNESS CENTRALITY: A MEASURE OF CORRELATION COEFFICIENT
ALTERNATIVES TO BETWEENNESS CENTRALITY: A MEASURE OF CORRELATION COEFFICIENTALTERNATIVES TO BETWEENNESS CENTRALITY: A MEASURE OF CORRELATION COEFFICIENT
ALTERNATIVES TO BETWEENNESS CENTRALITY: A MEASURE OF CORRELATION COEFFICIENTcsandit
 
THE IMPACT OF EXISTING SOUTH AFRICAN ICT POLICIES AND REGULATORY LAWS ON CLOU...
THE IMPACT OF EXISTING SOUTH AFRICAN ICT POLICIES AND REGULATORY LAWS ON CLOU...THE IMPACT OF EXISTING SOUTH AFRICAN ICT POLICIES AND REGULATORY LAWS ON CLOU...
THE IMPACT OF EXISTING SOUTH AFRICAN ICT POLICIES AND REGULATORY LAWS ON CLOU...csandit
 
COMPUTATIONAL METHODS FOR FUNCTIONAL ANALYSIS OF GENE EXPRESSION
COMPUTATIONAL METHODS FOR FUNCTIONAL ANALYSIS OF GENE EXPRESSIONCOMPUTATIONAL METHODS FOR FUNCTIONAL ANALYSIS OF GENE EXPRESSION
COMPUTATIONAL METHODS FOR FUNCTIONAL ANALYSIS OF GENE EXPRESSIONcsandit
 
WI-FI FINGERPRINT-BASED APPROACH TO SECURING THE CONNECTED VEHICLE AGAINST WI...
WI-FI FINGERPRINT-BASED APPROACH TO SECURING THE CONNECTED VEHICLE AGAINST WI...WI-FI FINGERPRINT-BASED APPROACH TO SECURING THE CONNECTED VEHICLE AGAINST WI...
WI-FI FINGERPRINT-BASED APPROACH TO SECURING THE CONNECTED VEHICLE AGAINST WI...csandit
 
How to Make Awesome SlideShares: Tips & Tricks
How to Make Awesome SlideShares: Tips & TricksHow to Make Awesome SlideShares: Tips & Tricks
How to Make Awesome SlideShares: Tips & TricksSlideShare
 
Getting Started With SlideShare
Getting Started With SlideShareGetting Started With SlideShare
Getting Started With SlideShareSlideShare
 

Viewers also liked (6)

ALTERNATIVES TO BETWEENNESS CENTRALITY: A MEASURE OF CORRELATION COEFFICIENT
ALTERNATIVES TO BETWEENNESS CENTRALITY: A MEASURE OF CORRELATION COEFFICIENTALTERNATIVES TO BETWEENNESS CENTRALITY: A MEASURE OF CORRELATION COEFFICIENT
ALTERNATIVES TO BETWEENNESS CENTRALITY: A MEASURE OF CORRELATION COEFFICIENT
 
THE IMPACT OF EXISTING SOUTH AFRICAN ICT POLICIES AND REGULATORY LAWS ON CLOU...
THE IMPACT OF EXISTING SOUTH AFRICAN ICT POLICIES AND REGULATORY LAWS ON CLOU...THE IMPACT OF EXISTING SOUTH AFRICAN ICT POLICIES AND REGULATORY LAWS ON CLOU...
THE IMPACT OF EXISTING SOUTH AFRICAN ICT POLICIES AND REGULATORY LAWS ON CLOU...
 
COMPUTATIONAL METHODS FOR FUNCTIONAL ANALYSIS OF GENE EXPRESSION
COMPUTATIONAL METHODS FOR FUNCTIONAL ANALYSIS OF GENE EXPRESSIONCOMPUTATIONAL METHODS FOR FUNCTIONAL ANALYSIS OF GENE EXPRESSION
COMPUTATIONAL METHODS FOR FUNCTIONAL ANALYSIS OF GENE EXPRESSION
 
WI-FI FINGERPRINT-BASED APPROACH TO SECURING THE CONNECTED VEHICLE AGAINST WI...
WI-FI FINGERPRINT-BASED APPROACH TO SECURING THE CONNECTED VEHICLE AGAINST WI...WI-FI FINGERPRINT-BASED APPROACH TO SECURING THE CONNECTED VEHICLE AGAINST WI...
WI-FI FINGERPRINT-BASED APPROACH TO SECURING THE CONNECTED VEHICLE AGAINST WI...
 
How to Make Awesome SlideShares: Tips & Tricks
How to Make Awesome SlideShares: Tips & TricksHow to Make Awesome SlideShares: Tips & Tricks
How to Make Awesome SlideShares: Tips & Tricks
 
Getting Started With SlideShare
Getting Started With SlideShareGetting Started With SlideShare
Getting Started With SlideShare
 

Similar to RECOGNITION OF RECAPTURED IMAGES USING PHYSICAL BASED FEATURES

IMPROVING IMAGE RESOLUTION THROUGH THE CRA ALGORITHM INVOLVED RECYCLING PROCE...
IMPROVING IMAGE RESOLUTION THROUGH THE CRA ALGORITHM INVOLVED RECYCLING PROCE...IMPROVING IMAGE RESOLUTION THROUGH THE CRA ALGORITHM INVOLVED RECYCLING PROCE...
IMPROVING IMAGE RESOLUTION THROUGH THE CRA ALGORITHM INVOLVED RECYCLING PROCE...cscpconf
 
A Review on Overview of Image Processing Techniques
A Review on Overview of Image Processing TechniquesA Review on Overview of Image Processing Techniques
A Review on Overview of Image Processing Techniquesijtsrd
 
A Study of Image Tampering Detection
A Study of Image Tampering DetectionA Study of Image Tampering Detection
A Study of Image Tampering DetectionIRJET Journal
 
3.introduction onwards deepa
3.introduction onwards deepa3.introduction onwards deepa
3.introduction onwards deepaSafalsha Babu
 
Passive Image Forensic Method to Detect Resampling Forgery in Digital Images
Passive Image Forensic Method to Detect Resampling Forgery in Digital ImagesPassive Image Forensic Method to Detect Resampling Forgery in Digital Images
Passive Image Forensic Method to Detect Resampling Forgery in Digital Imagesiosrjce
 
Evaluation Of Proposed Design And Necessary Corrective Action
Evaluation Of Proposed Design And Necessary Corrective ActionEvaluation Of Proposed Design And Necessary Corrective Action
Evaluation Of Proposed Design And Necessary Corrective ActionSandra Arveseth
 
Image Processing By SAIKIRAN PANJALA
 Image Processing By SAIKIRAN PANJALA Image Processing By SAIKIRAN PANJALA
Image Processing By SAIKIRAN PANJALASaikiran Panjala
 
2015.basicsof imageanalysischapter2 (1)
2015.basicsof imageanalysischapter2 (1)2015.basicsof imageanalysischapter2 (1)
2015.basicsof imageanalysischapter2 (1)moemi1
 
A Review Paper On Image Forgery Detection In Image Processing
A Review Paper On Image Forgery Detection In Image ProcessingA Review Paper On Image Forgery Detection In Image Processing
A Review Paper On Image Forgery Detection In Image ProcessingJennifer Daniel
 
HUMAN PHOTOGRAMMETRY: FOUNDATIONAL TECHNIQUES FOR CREATIVE PRACTITIONERS
HUMAN PHOTOGRAMMETRY: FOUNDATIONAL TECHNIQUES FOR CREATIVE PRACTITIONERSHUMAN PHOTOGRAMMETRY: FOUNDATIONAL TECHNIQUES FOR CREATIVE PRACTITIONERS
HUMAN PHOTOGRAMMETRY: FOUNDATIONAL TECHNIQUES FOR CREATIVE PRACTITIONERSijcga
 
DESIGN AND ANALYSIS OF PHASE-LOCKED LOOP AND PERFORMANCE PARAMETERS
DESIGN AND ANALYSIS OF PHASE-LOCKED LOOP AND PERFORMANCE PARAMETERSDESIGN AND ANALYSIS OF PHASE-LOCKED LOOP AND PERFORMANCE PARAMETERS
DESIGN AND ANALYSIS OF PHASE-LOCKED LOOP AND PERFORMANCE PARAMETERSIJMEJournal1
 
HUMAN PHOTOGRAMMETRY: FOUNDATIONAL TECHNIQUES FOR CREATIVE PRACTITIONERS
HUMAN PHOTOGRAMMETRY: FOUNDATIONAL TECHNIQUES FOR CREATIVE PRACTITIONERSHUMAN PHOTOGRAMMETRY: FOUNDATIONAL TECHNIQUES FOR CREATIVE PRACTITIONERS
HUMAN PHOTOGRAMMETRY: FOUNDATIONAL TECHNIQUES FOR CREATIVE PRACTITIONERSijcga
 
Human Photogrammetry: Foundational Techniques for Creative Practitioners
Human Photogrammetry: Foundational Techniques for Creative PractitionersHuman Photogrammetry: Foundational Techniques for Creative Practitioners
Human Photogrammetry: Foundational Techniques for Creative Practitionersijcga
 
General Review Of Algorithms Presented For Image Segmentation
General Review Of Algorithms Presented For Image SegmentationGeneral Review Of Algorithms Presented For Image Segmentation
General Review Of Algorithms Presented For Image SegmentationMelissa Moore
 
application of digital image processing and methods
application of digital image processing and methodsapplication of digital image processing and methods
application of digital image processing and methodsSIRILsam
 

Similar to RECOGNITION OF RECAPTURED IMAGES USING PHYSICAL BASED FEATURES (20)

IMPROVING IMAGE RESOLUTION THROUGH THE CRA ALGORITHM INVOLVED RECYCLING PROCE...
IMPROVING IMAGE RESOLUTION THROUGH THE CRA ALGORITHM INVOLVED RECYCLING PROCE...IMPROVING IMAGE RESOLUTION THROUGH THE CRA ALGORITHM INVOLVED RECYCLING PROCE...
IMPROVING IMAGE RESOLUTION THROUGH THE CRA ALGORITHM INVOLVED RECYCLING PROCE...
 
A Review on Overview of Image Processing Techniques
A Review on Overview of Image Processing TechniquesA Review on Overview of Image Processing Techniques
A Review on Overview of Image Processing Techniques
 
A Study of Image Tampering Detection
A Study of Image Tampering DetectionA Study of Image Tampering Detection
A Study of Image Tampering Detection
 
3.introduction onwards deepa
3.introduction onwards deepa3.introduction onwards deepa
3.introduction onwards deepa
 
Passive Image Forensic Method to Detect Resampling Forgery in Digital Images
Passive Image Forensic Method to Detect Resampling Forgery in Digital ImagesPassive Image Forensic Method to Detect Resampling Forgery in Digital Images
Passive Image Forensic Method to Detect Resampling Forgery in Digital Images
 
F017374752
F017374752F017374752
F017374752
 
Ch1.pptx
Ch1.pptxCh1.pptx
Ch1.pptx
 
Evaluation Of Proposed Design And Necessary Corrective Action
Evaluation Of Proposed Design And Necessary Corrective ActionEvaluation Of Proposed Design And Necessary Corrective Action
Evaluation Of Proposed Design And Necessary Corrective Action
 
Image Processing By SAIKIRAN PANJALA
 Image Processing By SAIKIRAN PANJALA Image Processing By SAIKIRAN PANJALA
Image Processing By SAIKIRAN PANJALA
 
2015.basicsof imageanalysischapter2 (1)
2015.basicsof imageanalysischapter2 (1)2015.basicsof imageanalysischapter2 (1)
2015.basicsof imageanalysischapter2 (1)
 
A Review Paper On Image Forgery Detection In Image Processing
A Review Paper On Image Forgery Detection In Image ProcessingA Review Paper On Image Forgery Detection In Image Processing
A Review Paper On Image Forgery Detection In Image Processing
 
Basics of Image processing
Basics of Image processingBasics of Image processing
Basics of Image processing
 
Jc3416551658
Jc3416551658Jc3416551658
Jc3416551658
 
G010245056
G010245056G010245056
G010245056
 
HUMAN PHOTOGRAMMETRY: FOUNDATIONAL TECHNIQUES FOR CREATIVE PRACTITIONERS
HUMAN PHOTOGRAMMETRY: FOUNDATIONAL TECHNIQUES FOR CREATIVE PRACTITIONERSHUMAN PHOTOGRAMMETRY: FOUNDATIONAL TECHNIQUES FOR CREATIVE PRACTITIONERS
HUMAN PHOTOGRAMMETRY: FOUNDATIONAL TECHNIQUES FOR CREATIVE PRACTITIONERS
 
DESIGN AND ANALYSIS OF PHASE-LOCKED LOOP AND PERFORMANCE PARAMETERS
DESIGN AND ANALYSIS OF PHASE-LOCKED LOOP AND PERFORMANCE PARAMETERSDESIGN AND ANALYSIS OF PHASE-LOCKED LOOP AND PERFORMANCE PARAMETERS
DESIGN AND ANALYSIS OF PHASE-LOCKED LOOP AND PERFORMANCE PARAMETERS
 
HUMAN PHOTOGRAMMETRY: FOUNDATIONAL TECHNIQUES FOR CREATIVE PRACTITIONERS
HUMAN PHOTOGRAMMETRY: FOUNDATIONAL TECHNIQUES FOR CREATIVE PRACTITIONERSHUMAN PHOTOGRAMMETRY: FOUNDATIONAL TECHNIQUES FOR CREATIVE PRACTITIONERS
HUMAN PHOTOGRAMMETRY: FOUNDATIONAL TECHNIQUES FOR CREATIVE PRACTITIONERS
 
Human Photogrammetry: Foundational Techniques for Creative Practitioners
Human Photogrammetry: Foundational Techniques for Creative PractitionersHuman Photogrammetry: Foundational Techniques for Creative Practitioners
Human Photogrammetry: Foundational Techniques for Creative Practitioners
 
General Review Of Algorithms Presented For Image Segmentation
General Review Of Algorithms Presented For Image SegmentationGeneral Review Of Algorithms Presented For Image Segmentation
General Review Of Algorithms Presented For Image Segmentation
 
application of digital image processing and methods
application of digital image processing and methodsapplication of digital image processing and methods
application of digital image processing and methods
 

Recently uploaded

Q-Factor General Quiz-7th April 2024, Quiz Club NITW
Q-Factor General Quiz-7th April 2024, Quiz Club NITWQ-Factor General Quiz-7th April 2024, Quiz Club NITW
Q-Factor General Quiz-7th April 2024, Quiz Club NITWQuiz Club NITW
 
Decoding the Tweet _ Practical Criticism in the Age of Hashtag.pptx
Decoding the Tweet _ Practical Criticism in the Age of Hashtag.pptxDecoding the Tweet _ Practical Criticism in the Age of Hashtag.pptx
Decoding the Tweet _ Practical Criticism in the Age of Hashtag.pptxDhatriParmar
 
31 ĐỀ THI THỬ VÀO LỚP 10 - TIẾNG ANH - FORM MỚI 2025 - 40 CÂU HỎI - BÙI VĂN V...
31 ĐỀ THI THỬ VÀO LỚP 10 - TIẾNG ANH - FORM MỚI 2025 - 40 CÂU HỎI - BÙI VĂN V...31 ĐỀ THI THỬ VÀO LỚP 10 - TIẾNG ANH - FORM MỚI 2025 - 40 CÂU HỎI - BÙI VĂN V...
31 ĐỀ THI THỬ VÀO LỚP 10 - TIẾNG ANH - FORM MỚI 2025 - 40 CÂU HỎI - BÙI VĂN V...Nguyen Thanh Tu Collection
 
Unraveling Hypertext_ Analyzing Postmodern Elements in Literature.pptx
Unraveling Hypertext_ Analyzing  Postmodern Elements in  Literature.pptxUnraveling Hypertext_ Analyzing  Postmodern Elements in  Literature.pptx
Unraveling Hypertext_ Analyzing Postmodern Elements in Literature.pptxDhatriParmar
 
Q4-PPT-Music9_Lesson-1-Romantic-Opera.pptx
Q4-PPT-Music9_Lesson-1-Romantic-Opera.pptxQ4-PPT-Music9_Lesson-1-Romantic-Opera.pptx
Q4-PPT-Music9_Lesson-1-Romantic-Opera.pptxlancelewisportillo
 
ClimART Action | eTwinning Project
ClimART Action    |    eTwinning ProjectClimART Action    |    eTwinning Project
ClimART Action | eTwinning Projectjordimapav
 
How to Fix XML SyntaxError in Odoo the 17
How to Fix XML SyntaxError in Odoo the 17How to Fix XML SyntaxError in Odoo the 17
How to Fix XML SyntaxError in Odoo the 17Celine George
 
Oppenheimer Film Discussion for Philosophy and Film
Oppenheimer Film Discussion for Philosophy and FilmOppenheimer Film Discussion for Philosophy and Film
Oppenheimer Film Discussion for Philosophy and FilmStan Meyer
 
MS4 level being good citizen -imperative- (1) (1).pdf
MS4 level   being good citizen -imperative- (1) (1).pdfMS4 level   being good citizen -imperative- (1) (1).pdf
MS4 level being good citizen -imperative- (1) (1).pdfMr Bounab Samir
 
Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)
Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)
Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)lakshayb543
 
Blowin' in the Wind of Caste_ Bob Dylan's Song as a Catalyst for Social Justi...
Blowin' in the Wind of Caste_ Bob Dylan's Song as a Catalyst for Social Justi...Blowin' in the Wind of Caste_ Bob Dylan's Song as a Catalyst for Social Justi...
Blowin' in the Wind of Caste_ Bob Dylan's Song as a Catalyst for Social Justi...DhatriParmar
 
4.11.24 Poverty and Inequality in America.pptx
4.11.24 Poverty and Inequality in America.pptx4.11.24 Poverty and Inequality in America.pptx
4.11.24 Poverty and Inequality in America.pptxmary850239
 
INTRODUCTION TO CATHOLIC CHRISTOLOGY.pptx
INTRODUCTION TO CATHOLIC CHRISTOLOGY.pptxINTRODUCTION TO CATHOLIC CHRISTOLOGY.pptx
INTRODUCTION TO CATHOLIC CHRISTOLOGY.pptxHumphrey A Beña
 
Active Learning Strategies (in short ALS).pdf
Active Learning Strategies (in short ALS).pdfActive Learning Strategies (in short ALS).pdf
Active Learning Strategies (in short ALS).pdfPatidar M
 
Scientific Writing :Research Discourse
Scientific  Writing :Research  DiscourseScientific  Writing :Research  Discourse
Scientific Writing :Research DiscourseAnita GoswamiGiri
 
Team Lead Succeed – Helping you and your team achieve high-performance teamwo...
Team Lead Succeed – Helping you and your team achieve high-performance teamwo...Team Lead Succeed – Helping you and your team achieve high-performance teamwo...
Team Lead Succeed – Helping you and your team achieve high-performance teamwo...Association for Project Management
 
Measures of Position DECILES for ungrouped data
Measures of Position DECILES for ungrouped dataMeasures of Position DECILES for ungrouped data
Measures of Position DECILES for ungrouped dataBabyAnnMotar
 

Recently uploaded (20)

Q-Factor General Quiz-7th April 2024, Quiz Club NITW
Q-Factor General Quiz-7th April 2024, Quiz Club NITWQ-Factor General Quiz-7th April 2024, Quiz Club NITW
Q-Factor General Quiz-7th April 2024, Quiz Club NITW
 
Decoding the Tweet _ Practical Criticism in the Age of Hashtag.pptx
Decoding the Tweet _ Practical Criticism in the Age of Hashtag.pptxDecoding the Tweet _ Practical Criticism in the Age of Hashtag.pptx
Decoding the Tweet _ Practical Criticism in the Age of Hashtag.pptx
 
prashanth updated resume 2024 for Teaching Profession
prashanth updated resume 2024 for Teaching Professionprashanth updated resume 2024 for Teaching Profession
prashanth updated resume 2024 for Teaching Profession
 
31 ĐỀ THI THỬ VÀO LỚP 10 - TIẾNG ANH - FORM MỚI 2025 - 40 CÂU HỎI - BÙI VĂN V...
31 ĐỀ THI THỬ VÀO LỚP 10 - TIẾNG ANH - FORM MỚI 2025 - 40 CÂU HỎI - BÙI VĂN V...31 ĐỀ THI THỬ VÀO LỚP 10 - TIẾNG ANH - FORM MỚI 2025 - 40 CÂU HỎI - BÙI VĂN V...
31 ĐỀ THI THỬ VÀO LỚP 10 - TIẾNG ANH - FORM MỚI 2025 - 40 CÂU HỎI - BÙI VĂN V...
 
INCLUSIVE EDUCATION PRACTICES FOR TEACHERS AND TRAINERS.pptx
INCLUSIVE EDUCATION PRACTICES FOR TEACHERS AND TRAINERS.pptxINCLUSIVE EDUCATION PRACTICES FOR TEACHERS AND TRAINERS.pptx
INCLUSIVE EDUCATION PRACTICES FOR TEACHERS AND TRAINERS.pptx
 
Unraveling Hypertext_ Analyzing Postmodern Elements in Literature.pptx
Unraveling Hypertext_ Analyzing  Postmodern Elements in  Literature.pptxUnraveling Hypertext_ Analyzing  Postmodern Elements in  Literature.pptx
Unraveling Hypertext_ Analyzing Postmodern Elements in Literature.pptx
 
Q4-PPT-Music9_Lesson-1-Romantic-Opera.pptx
Q4-PPT-Music9_Lesson-1-Romantic-Opera.pptxQ4-PPT-Music9_Lesson-1-Romantic-Opera.pptx
Q4-PPT-Music9_Lesson-1-Romantic-Opera.pptx
 
ClimART Action | eTwinning Project
ClimART Action    |    eTwinning ProjectClimART Action    |    eTwinning Project
ClimART Action | eTwinning Project
 
How to Fix XML SyntaxError in Odoo the 17
How to Fix XML SyntaxError in Odoo the 17How to Fix XML SyntaxError in Odoo the 17
How to Fix XML SyntaxError in Odoo the 17
 
Oppenheimer Film Discussion for Philosophy and Film
Oppenheimer Film Discussion for Philosophy and FilmOppenheimer Film Discussion for Philosophy and Film
Oppenheimer Film Discussion for Philosophy and Film
 
MS4 level being good citizen -imperative- (1) (1).pdf
MS4 level   being good citizen -imperative- (1) (1).pdfMS4 level   being good citizen -imperative- (1) (1).pdf
MS4 level being good citizen -imperative- (1) (1).pdf
 
Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)
Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)
Visit to a blind student's school🧑‍🦯🧑‍🦯(community medicine)
 
Blowin' in the Wind of Caste_ Bob Dylan's Song as a Catalyst for Social Justi...
Blowin' in the Wind of Caste_ Bob Dylan's Song as a Catalyst for Social Justi...Blowin' in the Wind of Caste_ Bob Dylan's Song as a Catalyst for Social Justi...
Blowin' in the Wind of Caste_ Bob Dylan's Song as a Catalyst for Social Justi...
 
4.11.24 Poverty and Inequality in America.pptx
4.11.24 Poverty and Inequality in America.pptx4.11.24 Poverty and Inequality in America.pptx
4.11.24 Poverty and Inequality in America.pptx
 
Paradigm shift in nursing research by RS MEHTA
Paradigm shift in nursing research by RS MEHTAParadigm shift in nursing research by RS MEHTA
Paradigm shift in nursing research by RS MEHTA
 
INTRODUCTION TO CATHOLIC CHRISTOLOGY.pptx
INTRODUCTION TO CATHOLIC CHRISTOLOGY.pptxINTRODUCTION TO CATHOLIC CHRISTOLOGY.pptx
INTRODUCTION TO CATHOLIC CHRISTOLOGY.pptx
 
Active Learning Strategies (in short ALS).pdf
Active Learning Strategies (in short ALS).pdfActive Learning Strategies (in short ALS).pdf
Active Learning Strategies (in short ALS).pdf
 
Scientific Writing :Research Discourse
Scientific  Writing :Research  DiscourseScientific  Writing :Research  Discourse
Scientific Writing :Research Discourse
 
Team Lead Succeed – Helping you and your team achieve high-performance teamwo...
Team Lead Succeed – Helping you and your team achieve high-performance teamwo...Team Lead Succeed – Helping you and your team achieve high-performance teamwo...
Team Lead Succeed – Helping you and your team achieve high-performance teamwo...
 
Measures of Position DECILES for ungrouped data
Measures of Position DECILES for ungrouped dataMeasures of Position DECILES for ungrouped data
Measures of Position DECILES for ungrouped data
 

RECOGNITION OF RECAPTURED IMAGES USING PHYSICAL BASED FEATURES

A recaptured image differs from an ordinary photograph in that what is captured is an image reproduction surface, such as a printed picture or an LCD display, rather than a general scene. Image recapture detection distinguishes real-scene images from such recaptured ones. The difficulty of recognizing recaptured images is illustrated in Figure 1, where (a) and (b) are real images and (c) and (d) are recaptured images. Recognizing recaptured images among real ones is an extremely complicated task for an artificial system.
  • 2. In recent years, a considerable amount of research has been conducted on image recapture detection to restore the trustworthiness of digital images [1], [2], [3]. The image recapturing process makes it possible to restore the intrinsic image regularities and to remove some common tampering anomalies automatically. Recognition of recaptured images is therefore an important task for current image forensic systems. Beyond that, an image forensic system can detect rebroadcast attacks on a biometric identification system. We therefore study the problem of recaptured image detection as an application in image forensics.

Figure 1. Difficulties of recognizing recaptured images

On the other hand, face authentication systems on mobile devices such as laptop computers and smart phones are designed with liveness detection for verifying a live face. For such systems, faking an identity by recapturing a printed face photo has become a serious issue. In robot vision, differentiating objects on a poster from real objects requires additional intelligence, and IRD is also useful for that purpose. Another important application of IRD is composite image detection: one way to conceal the composition in a composite image is to recapture it.

The process of producing real-scene images and the corresponding recaptured images is shown in Figure 2. As shown in Figure 2 (a), the real image can be obtained with any type of camera. For the reproduction process, the real image is first captured by a camera and then reproduced on different types of printing or display media, for example printed on office A4 paper with a colour laser printer, or displayed on the LCD screen of a PC. Finally, the recaptured image is obtained by photographing the reproduction. Displaying or printing a scene on any type of physical medium leads to poor quality in the recaptured image.
Artefacts such as texture patterns and colour fading can easily be identified. As shown in Figure 3, low-quality recaptured images can easily be identified by the human eye.
  • 3. Figure 2. The process of producing a real image and a recaptured image

Figure 3. Comparison of a real image (a) and a recaptured image (b)

For instance, consider display on an LCD screen as the reproduction process, as illustrated in Figure 4. Cao and Kot [5] compared real images and corresponding recaptured images under a large number of controllable settings, including camera settings, LCD settings and environmental settings. They concluded that the visual quality of these finely recaptured images is significantly better than that of casually recaptured images. This presents a big opportunity for forgers to recapture artificially generated scenery and use the recaptured image to fool an image forensic system. Recently, a Vietnamese security group found that most commercial laptop computers with face authentication systems can easily be attacked by simply presenting a human face printed on A4-size paper [6].
  • 4. Figure 4. Some controllable settings for the reproduction process on an LCD screen

2. LITERATURE REVIEW

This section reviews several approaches used to identify recaptured images from real-scene images, as well as studies related to distinguishing real-scene images from images recaptured from printing paper and LCD screens.

Xinting Gao et al. [1] introduced a physics-based approach for recaptured image detection. The set of physics-based features is composed of contextual background information, the spatial distribution of specularity (related to the surface geometry), the image gradient (which captures the non-linearity of the recaptured image rendering process), colour information and contrast (related to the quality of the reproduction rendering), and a blurriness measure (related to the recapturing process). These features were used to classify recaptured images from real ones, and achieved significantly better classification performance on low-resolution images than wavelet statistical features.

Ke et al. [2] proposed an image recapture detection method based on multiple feature descriptors. It uses combinations of low-dimensional features including a texture feature, a noise feature, a difference-of-histogram feature and a colour feature. The experimental results demonstrated that this method is efficient, with a good detection rate for distinguishing real-scene images from recaptured ones, and that it has low time complexity.

Hany Farid and Siwei Lyu [4] presented a statistical model with first- and higher-order statistics which captures certain statistical regularities of natural images. Hang Yu et al. [3] brought up a cascaded dichromatic model of the high-frequency spatial variations in the specular component of a recaptured image. This distinctive feature is a result of the micro-structure of the printing paper.
Using a probabilistic support vector machine classifier, Cao and Kot [5] distinguished images recaptured from LCD screens from natural images. They performed their experiments using three types of features: a texture feature based on the Local Binary Pattern, multi-scale wavelet statistics and a colour feature.
  • 5. 3. DATASET

Bai et al. [7] found that image resolution affects the performance of the algorithms. Xinting Gao et al. [8] therefore presented a recaptured image database taken with smart-phone cameras. Although some other databases are publicly available, this database was used here because the general resolution of its images is set to VGA (640 x 480). The database was constructed using the following criteria:

• The images come in pairs, the real image and the recaptured one, taken by the same end-user camera.
• The images consist of outdoor natural scenes, indoor office or residence scenes, and close-up or distant scenes.

3.1. Real Image Dataset

The real images are obtained with any type of camera, as shown in Figure 2 (a). The images in the real image dataset were produced using three popular brands of smart phones: Acer M900, Nokia N95 and HP iPAQ hw6960. These camera phones were set to auto mode whenever possible. All three types of phones have a back-facing camera. In total, 1094 images were used as real images. Table 1 lists the number of images taken with each brand of camera.

Table 1. The number of real images.

Types     Images
Acer B    407
HP B      369
Nokia B   318
Total     1094

3.2. Recaptured Image Dataset

As illustrated in Figure 2 (b), the reproduction process is purely image-based. The images in the recaptured image dataset were produced using three types of DSLR (digital single-lens reflex) cameras: Nikon D90, Canon EOS 450D and Olympus E-520. These cameras were set to auto mode whenever possible, and the resulting images were saved in JPEG format. The DSLR cameras have high resolution (greater than 3000 x 2000 pixels) and high quality. Two types of reproduction process were used in constructing the recaptured dataset: printing on paper and displaying on a screen.
The images were printed on A4-size office paper using an HP CP3505dn laser printer and a Xerox Phaser 8400 ink printer, and were also printed as 4R glossy and matte photos. For LCD screen display, a Dell 2007FP LCD screen (1600 x 1200 pixels) was used. Finally, the reproduced images were recaptured by the camera phones mentioned above. Table 2 lists the number of recaptured images for each reproduction process. In total, I used 1137 recaptured images.
Table 2. The number of recaptured images.

4. METHODOLOGY

In this paper, I propose an image recapture detection method based on physical features. A working diagram of the proposed method is illustrated in Figure 5. The images in the real image dataset and the recaptured image dataset are used in the feature extraction step. For each image, texture, HSV colour and blurriness features are extracted. Both the features and the labels are then used to train the SVM classifier. This is the training procedure of the method. In the testing procedure, the features of a test image are extracted, and the SVM classifier decides whether they belong to a real image or a recaptured image.

4.1. Feature Extraction

In general, a recaptured image and the corresponding real image will never be identical, owing to the direction of the light, the distance between the camera and the scene, the sensor resolution, the lens quality and so forth. Treating the problem as a binary classification task, I introduce the following three types of features to differentiate recaptured images from real images: texture, HSV colour and blurriness.
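To make the training procedure concrete, the sketch below assembles the 26-dimensional feature vector used by the method (16 texture dimensions + 9 colour-moment dimensions + 1 blur dimension). It is an illustration only, not the authors' implementation: the texture and blur extractors are hypothetical placeholders, while the nine HSV colour moments are computed as described later in Section 4.1.2.

```python
import numpy as np

def colour_moments(hsv):
    """Nine colour moments: mean, standard deviation and skewness
    of each HSV channel (Section 4.1.2)."""
    feats = []
    for c in range(3):
        p = hsv[:, :, c].astype(np.float64).ravel()
        mean = p.mean()
        std = p.std()
        skew = np.cbrt(((p - mean) ** 3).mean())
        feats.extend([mean, std, skew])
    return np.array(feats)                       # 9 dimensions

def texture_feature(gray):
    """Placeholder for the 16-bin CS-LBP histogram (Section 4.1.1)."""
    return np.zeros(16)                          # 16 dimensions

def blur_feature(gray):
    """Placeholder for the no-reference blur metric of Crete et al. [11]."""
    return np.zeros(1)                           # 1 dimension

def extract_features(hsv, gray):
    """Concatenate texture, colour and blur features into one vector."""
    return np.concatenate([texture_feature(gray),
                           colour_moments(hsv),
                           blur_feature(gray)])  # 16 + 9 + 1 = 26
```

The resulting 26-D vectors, together with their labels (real vs. recaptured), would then be fed to a linear-kernel SVM, for example `sklearn.svm.SVC(kernel="linear")`.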
Figure 5. Diagram for the proposed image recapture detection method

4.1.1. Texture feature

Figure 6. LBP and CS-LBP features for a neighbourhood of 8 pixels
Texture plays an important role in computer vision applications. Over the past decades, many algorithms have been presented for texture feature extraction. They can be divided into two main approaches, statistical and structural; among the most commonly used algorithms are Gabor filters, wavelet transforms and so forth. Recently the local binary pattern (LBP), a statistical approach, has received considerable attention in many applications [9]. Due to the high dimensionality of the LBP operator, new work increasingly uses the centre-symmetric local binary pattern (CS-LBP), an extension of the LBP operator. Besides reducing dimensionality, CS-LBP captures gradient information better than the basic LBP. Since the CS-LBP descriptor is computationally simple, effective and robust to various image transformations, a brief review of CS-LBP is given here.

The CS-LBP operator [10] is derived from the LBP operator. Histograms of the LBP operator are long (256 bins) and the operator is not robust on flat image regions; CS-LBP was proposed to reduce these drawbacks. Whereas the LBP operator compares each neighbouring pixel with the centre pixel, the CS-LBP operator compares centre-symmetric pairs of pixels, as illustrated in Figure 6. For the same number of neighbours, this halves the number of comparisons, so that LBP produces 256 (2^8) different binary patterns whereas CS-LBP produces only 16 (2^4) different patterns for 8 neighbours. For flat areas, the operator's robustness can be increased by thresholding the gray-level differences at a small value T. Thus, the CS-LBP operator is defined by Eq. (1):

  CS-LBP_{N,T}(x, y) = Σ_{i=0}^{(N/2)-1} s(n_i − n_{i+N/2}) 2^i,  where s(u) = 1 if u > T, 0 otherwise    (1)

where n_0, ..., n_{N-1} are the gray values of the N equally spaced neighbours of the pixel (x, y).

4.1.2. Colour feature

In the reproduction stage, the reproduction devices introduce some tint into the reproduced images, and the lighting can reduce the contrast and saturation of a recaptured image.
So the colour features of a recaptured image look different from those of its original image, as shown in Figure 7.
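The CS-LBP texture feature of Section 4.1.1 can be sketched as follows. This is an illustrative implementation, not the authors' code: it assumes the 8 neighbours are taken from the 3 x 3 square neighbourhood of each pixel (a common simplification of the circular sampling in [10]) and a small threshold T. The normalised 16-bin histogram of the codes is the texture descriptor.

```python
import numpy as np

def cs_lbp(image, T=0.01):
    """CS-LBP codes for 8 neighbours on the 3x3 neighbourhood.

    Each interior pixel compares its 4 centre-symmetric neighbour
    pairs (n_i, n_{i+4}), giving a 4-bit code in [0, 15] (Eq. (1))."""
    img = np.asarray(image, dtype=np.float64)
    h, w = img.shape

    def shifted(dy, dx):
        # neighbour values at offset (dy, dx) for all interior pixels
        return img[1 + dy : h - 1 + dy, 1 + dx : w - 1 + dx]

    # 8 neighbours n0..n7; centre-symmetric pairs are (n_i, n_{i+4})
    offsets = [(-1, 0), (-1, 1), (0, 1), (1, 1),
               (1, 0), (1, -1), (0, -1), (-1, -1)]
    codes = np.zeros((h - 2, w - 2), dtype=np.int64)
    for i in range(4):
        diff = shifted(*offsets[i]) - shifted(*offsets[i + 4])
        codes += (diff > T).astype(np.int64) << i
    return codes

def cs_lbp_histogram(image, T=0.01):
    """Normalised 16-bin histogram of CS-LBP codes: the texture feature."""
    codes = cs_lbp(image, T)
    hist = np.bincount(codes.ravel(), minlength=16).astype(np.float64)
    return hist / hist.sum()
```

On a flat image every code is 0, illustrating why the threshold T is needed for robustness in flat areas; the 16 histogram bins supply the 16 texture dimensions of the feature vector.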
Figure 7. Comparison of the colour features introduced by the reproduction process

A colour model describes colours, usually representing a colour as a tuple (generally of three values). The purpose of a colour model is to facilitate the specification of colours in a standard, agreed way. The RGB colour model is the most common colour model for digital images because it retains compatibility with computer displays. However, RGB has some drawbacks: it is not well suited to specifying or recognizing colours, since it is difficult to express a specific colour in the RGB model, and it is a hardware-oriented system that reflects the use of CRTs. In contrast, the HSV colour model is commonly used in colour image retrieval systems, since HSV colours are defined in a way that matches human perception, unlike RGB. HSV stands for Hue, Saturation and Value. The coordinate system is a hexagon, shown in Figure 8 (a), and Figure 8 (b) shows a view of the HSV colour model. The Value represents the intensity of a colour and is decoupled from the colour information of the image. The hue and saturation components are intimately related to the way the human eye perceives colour, yielding image processing algorithms with a physiological basis. As hue varies from 0 to 1.0, the corresponding colours vary from red through yellow, green, cyan, blue and magenta back to red, so that there are in fact red values at both 0 and 1.0. As saturation varies from 0 to 1.0, the corresponding colours (hues) vary from unsaturated (shades of gray) to fully saturated (no white component). As value, or brightness, varies from 0 to 1.0, the corresponding colours become increasingly brighter.

Figure 8. (a) HSV Cartesian coordinate system (b) HSV colour model

Colour histograms and colour moments are widely used to represent the colour information of an image.
The colour histogram is the approach most frequently adopted in content-based image retrieval systems; it describes the frequency of colours in an image. Although widely used, it has some disadvantages. It is sensitive to noisy interference.
A small change in an image may result in a large change in its histogram values, and the histogram is computationally expensive. Colour moments are measures that can be used to differentiate images based on their colour features. The assumption underlying colour moments is that the distribution of colour in an image can be interpreted as a probability distribution, and probability distributions are characterized by a small number of moments; for example, a normal distribution is characterized by its mean and variance. It follows that if the colour in an image follows a certain probability distribution, the moments of that distribution can be used as features to identify the image by its colour. The mean, standard deviation and skewness of an image are known as its colour moments.

In the HSV colour model, a colour is defined by three values: hue, saturation and value. Colour moments are calculated for each of these channels, so an image is characterized by 9 moments: 3 moments for each of the 3 colour channels. Let p_ij be the value of the i-th colour channel at the j-th image pixel, and N the number of pixels. The three colour moments can be defined as:

• Moment 1 - Mean: the average colour value in the image,
  E_i = (1/N) Σ_{j=1}^{N} p_ij

• Moment 2 - Standard deviation: the square root of the variance of the distribution,
  σ_i = sqrt( (1/N) Σ_{j=1}^{N} (p_ij − E_i)^2 )

• Moment 3 - Skewness: a measure of the degree of asymmetry of the distribution,
  s_i = ( (1/N) Σ_{j=1}^{N} (p_ij − E_i)^3 )^{1/3}

In the HSV colour space, the variable i takes values from 1 to 3 (i.e. 1 = H, 2 = S, 3 = V). The resulting feature for an image therefore contains 9 values, in the form of a 3 x 3 matrix of the following format:

  [ E11  E12  E13
    σ11  σ12  σ13
    s11  s12  s13 ]

Where:
E11, E12, E13 represent the mean values for H, S and V; σ11, σ12, σ13 the standard deviations; and s11, s12, s13 the skewness values.

4.1.3. Blurriness

In a recaptured image, blurriness can arise from three key factors:

• The first capture device or the printing device may be of low resolution.
• The display medium may not be in the focus range of the camera, owing to specific recapture settings.
• If the end-user camera has a limited depth of field, the distant background may be blurred while the entire display medium is in focus.

Figure 9. (a) and (b)

In this research work, I exploit such information as a distinguishing feature for recognizing whether an image is a real-scene image or a recaptured one. The method proposed by Crete et al. [11], based on the discrimination between different levels of blur perceptible in the same image, is used to calculate a no-reference perceptual blur metric ranging from 0 to 1, corresponding respectively to the best and the worst quality in terms of blur perception, as shown in Figure 9 (a) and (b).

Figure 10. Simplified flow chart of the blur estimation principle

As shown in Figure 10, in the first step the intensity variations between neighbouring pixels of the input image are computed. A low-pass filter is then applied, and the variations between neighbouring pixels are computed again. Comparing these intensity variations allows us to
evaluate the blur annoyance: a large variation between the original and the blurred image means that the original image is sharp, whereas a slight variation means that the original image is already blurred.

4.2. Classification

Classification consists of predicting a certain outcome based on a given input. Among the various classification techniques, the Support Vector Machine (SVM) was originally developed for solving binary classification problems [12].

Figure 11. An example of a separable problem in a 2-dimensional space

Consider Figure 11 as an example of a linearly separable problem. Suppose we are given a set of l training points of the form

  (x_i, y_i), i = 1, ..., l, with x_i ∈ R^n and labels y_i ∈ {+1, −1}.

We try to find a classification boundary function f(x) = y that not only correctly classifies the input patterns in the training data but also correctly classifies unseen patterns. The classification boundary f(x) = 0 is a hyperplane defined by its normal vector w, which divides the input space into the class +1 vectors on one side and the class −1 vectors on the other. Then there exists f(x) such that

  y_i (w · x_i + b) ≥ 1,  i = 1, ..., l    (6)
The optimal hyperplane is defined by maximizing the distance between the hyperplane and the data points closest to it (called the support vectors). We therefore maximize the margin γ = 2/||w||, or equivalently minimize ||w||, subject to constraint (6). This is a quadratic programming (QP) optimization problem that can be expressed as:

  min_{w,b} (1/2) ||w||^2  subject to  y_i (w · x_i + b) ≥ 1, i = 1, ..., l    (8)

Usually, datasets are not linearly separable in the input space. To deal with this situation, slack variables ξ_i are introduced into Eq. (8), together with a parameter C that determines the trade-off between maximization of the margin and minimization of the classification error. The QP optimization problem then becomes:

  min_{w,b,ξ} (1/2) ||w||^2 + C Σ_{i=1}^{l} ξ_i  subject to  y_i (w · x_i + b) ≥ 1 − ξ_i,  ξ_i ≥ 0    (9)

The solution to the above optimization problem has the form

  f(x) = Σ_{i=1}^{l} α_i y_i Φ(x_i) · Φ(x) + b    (10)

where Φ(·) is the mapping function that transforms vectors in the input space to the feature space. The dot product in Eq. (10) can be computed without explicitly mapping the points into the feature space by using a kernel function. The proposed method uses the linear kernel of the form

  K(x_i, x_j) = x_i · x_j    (11)

5. EXPERIMENTAL SETUP AND TESTING RESULTS

This section evaluates the proposed image recapture detection method. In this experiment, I used the smart phone recapture image database proposed by Xinting Gao et al. [8]. The real-scene images were obtained with three popular brands of smart phones with back-facing cameras: Acer M900, Nokia N95 and HP iPAQ hw6960. The recaptured images were obtained using three types of DSLR cameras: Nikon D90, Canon EOS 450D and Olympus E-520. In total, 2231 images were used, comprising 1094 real-scene images and 1137 recaptured images. Three different experiments are performed in
order to demonstrate the performance of the proposed method. First, the proposed method is compared with several state-of-the-art methods. Second, the performances of different combinations of the features used in this paper are compared. Last, the performances across brands of smart phones are compared.

5.1. Experiment I

The performance of the proposed method is compared with the state-of-the-art methods. As suggested by Ke et al. [2], the whole dataset is partitioned into training and testing images as shown in Table 3, and performance is measured using accuracy.

Table 3. Training and testing image selection.

Twenty-six-dimensional low-level features comprising texture, HSV colour and blurriness are extracted from the training images. Table 4 shows the recognition rate of the proposed method using different training and testing samples.

Table 4. Recognition rate as accuracy with different image samples on the Smart Phone Recapture Image Database [8].

Table 5 compares the performance of the proposed method with the state-of-the-art methods. From the accuracy shown in Table 5, it can be seen that the proposed method gives similar performance to the other methods.

Table 5. Comparison of feature dimension and performance achieved by different methods on the smart phone recapture image database [8].
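The accuracy reported in these tables is simply the fraction of correctly classified test images. As a minimal sketch (the +1/−1 label convention for real vs. recaptured images is an assumption for illustration):

```python
def accuracy(predicted, actual):
    """Fraction of test images whose predicted label (e.g. +1 = real,
    -1 = recaptured) matches the ground-truth label."""
    correct = sum(p == a for p, a in zip(predicted, actual))
    return correct / len(actual)
```

For example, if three out of four test images are classified correctly, the accuracy is 0.75.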
5.2. Experiment II

Table 6. Dimensions and performance of the different combinations of features.

In order to find the most robust feature for recognizing recaptured images, I computed the performance of the recapture detection method using several sets of image features: [texture], [HSV colour], [blurriness], [texture + HSV colour], [texture + blurriness] and [HSV colour + blurriness]. The results are shown in Table 6. It is observed that the CS-LBP operator, which is used to extract the texture feature, is the most robust feature for recognizing recaptured images.

5.3. Experiment III

In order to find out which smart phone has the best-quality capture process, the proposed method is applied to image samples captured by each of the three popular brands of smart phones: Acer M900, Nokia N95 and HP iPAQ hw6960. Table 7 shows that the proposed recapture detection method based on physical features is more effective for the Acer M900 than for the Nokia N95 and the HP iPAQ hw6960.

Table 7. Performance on the brands of smart phones.

From these experiments it is concluded that the proposed image recapture detection method achieves comparable classification performance with low-dimensional features comprising texture, HSV colour and blurriness. Among them, the texture feature extracted with the CS-LBP operator is crucial for the recognition problem, and the proposed method is most effective for the Acer M900.

6. CONCLUSIONS

In this paper, I proposed an image recapture detection method based on a set of physical features, combining low-level texture, HSV colour and blurriness features. The proposed method is efficient, with a good recognition rate for distinguishing real-scene images from recaptured ones. Even though the method uses low-dimensional features, it performs well with both small and large numbers of training images.

There is a limitation in this research work: the dataset consists only of images taken with the back-facing cameras of three types of smart phones, as described in the Dataset section. This limits Experiment III's ability to characterize the overall performance of smart phone brands under the proposed method. Future work is to use the most robust feature to train two dictionaries using the K-SVD approach [13]; using these two learned dictionaries, we would be able to determine whether a given image has been recaptured. Another direction is to extract further features and measure performance to find the best combination of all the features.

REFERENCES

[1] X. Gao, T.-T. Ng, B. Qiu & S.-F. Chang, (2010) “Single-view recaptured image detection based on physics-based features”, IEEE International Conference on Multimedia and Expo (ICME), pp1469-1474.

[2] Y. Ke, Q. Shan, F. Qin & W. Min, (2013) “Image recapture detection using multiple features”, International Journal of Multimedia and Ubiquitous Engineering, Vol. 8, No. 5, pp71-82.

[3] H. Yu, T.-T. Ng & Q. Sun, (2008) “Recaptured Photo Detection Using Specularity Distribution”, IEEE International Conference on Image Processing, pp3140-3143.

[4] H. Farid & S. Lyu, (2003) “Higher-order wavelet statistics and their application to digital forensics”, IEEE Workshop on Statistical Analysis in Computer Vision.

[5] H. Cao & A. C. Kot, (2010) “Identification of Recaptured Photographs on LCD Screens”, IEEE International Conference on Acoustics, Speech and Signal Processing, pp1790-1793.

[6] D. Ngo, (2008) Vietnamese security firm: Your face is easy to fake, [Online], Available: http://news.cnet.com/8301-17938105-10110987.html

[7] J. Bai, T.-T.
Ng, X. Gao & Y. Q. Shi, (2010) “Is physics-based liveness detection truly possible with a single image?”, IEEE International Symposium on Circuits and Systems (ISCAS).

[8] X. Gao, B. Qiu, J. Shen, T.-T. Ng & Y. Q. Shi, (2011) Digital Watermarking: 9th International Workshop, IWDW Revised Selected Papers, pp90-104.

[9] W. Xiaosheng & S. Junding, (2009) “An effective texture spectrum descriptor”, Fifth International Conference on Information Assurance and Security.

[10] M. Heikkila & C. Schmid, (2009) “Description of interest regions with local binary patterns”, Pattern Recognit., Vol. 42, No. 3, pp425-436.

[11] F. Crete, T. Dolmiere, P. Ladret & M. Nicolas, (2007) “The blur effect: perception and estimation with a new no-reference perceptual blur metric”, SPIE International Society for Optical Engineering.

[12] A. Ramanan, S. Suppharangsan & M. Niranjan, (2007) “Unbalanced Decision Tree for Multi-class Classification”, IEEE International Conference on Industrial and Information Systems (ICIIS’07), pp291-294.
[13] T. Thongkamwitoon, H. Muammar & P. L. Dragotti, (2014) “Robust Image Recapture Detection using a K-SVD Learning Approach to Train Dictionaries of Edge Profiles”, IEEE International Conference on Image Processing (ICIP), pp5317-5321.

AUTHORS

S. A. A. H. Samaraweera received the B.Sc. Special Degree in Computer Science from the University of Jaffna, Sri Lanka in 2016. Her current research interests include image processing and digital image forensics.

B. Mayurathan received the PhD Degree in Computer Science from the University of Peradeniya, Sri Lanka in 2014. Her current research interests include computer vision and machine learning.