

Detecting copy-move forgery in images

Posted 11 years ago
Mathematica 9 offers new image processing capabilities.

There are some quite interesting examples in Matlab for detecting copy-move forgery in images. I was wondering if anyone had ideas on how to implement such functionality based on Mathematica 9?

As a good basis to start with, here are some information and Matlab code:

A SIFT-based forensic method for copy-move detection

Kind regards, Olivier
POSTED BY: olivier k
Dear Vitaliy,

I have opened a new discussion on the complexity measure based on compression rates: Sweeping parameter spaces using "gzip-Entropy". I hope that the post illustrates the main idea. I have also added a section on Mathematica's Compress function. It is interesting that we get similar results, because it is usually claimed that one has to use gzip (not even bzip2). I did not have time/space to explore the use of our computer cluster to sweep parameter spaces, but that is obviously very straightforward with Mathematica's HPC capabilities. I do not yet have access to GPUs, but I suppose that one could use them effectively for these brute-force studies.

I am quite aware of the Wolfram Science Summer School, which has generated quite some interest among students here. We will certainly look into it. It sounds very exciting. Shame that there is none in Europe or the UK.

Of course, I read your article on Visibility Graphs. Very interesting. I have applied it to some systems we have an interest in. I have also been working a wee bit with a somewhat similar method. If I have time, I'll post something soon.
POSTED BY: Marco Thiel
Marco, thank you for sharing this, very interesting. I also got curious about your P.S. about estimating complexity based on compression rates. It reminded me of something we do often at the Wolfram Science Summer School. As a research approach we frequently explore spaces of algorithms and simple programs that produce complex behavior, trying to assess that complexity with some compression routines. By the way, Mathematica has its own Compress function, which can also be used for those purposes. An example of such a project could be:

Exploring CA Rule Spaces by Means of Data Compression

I realized you do research in complex systems and thought this may be of interest to you or your students. You, your colleagues or students may also be interested to learn that applications are now open for the 2014 Wolfram Science Summer School. It is a very fun and educational experience that I would recommend as a former student and a current faculty member there. Also perhaps of interest to you is a post I wrote about a mapping between time series and networks, so-called Visibility Graphs. This gives some extra graph-theoretical tools to explore the complexity of time series.
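As a hedged illustration of that idea (my own sketch, not from the post or the linked project): Compress can serve as a crude complexity probe over a cellular automaton rule space, with the compressed size of each evolution as the score. The rule numbers below are just examples.

```mathematica
(* Sketch: estimate the apparent complexity of elementary CA rules by
   the size of their compressed evolution. *)
compressedSize[rule_] :=
 ByteCount[Compress[CellularAutomaton[rule, {{1}, 0}, 200]]]

(* A chaotic rule such as 30 should compress far worse than a simple
   repetitive rule such as 250, with nested rule 90 in between. *)
compressedSize /@ {250, 90, 30}
```

Sorting all 256 elementary rules by this score is a quick way to surface the visually complex ones without inspecting each evolution by eye.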
POSTED BY: Vitaliy Kaurov
Dear Sam,

yes, that can happen. The method I was adapting is called "Error Level Analysis" (ELA). I only used a very naive implementation. 

Please have a look at this website:
I can also provide references to publications if anyone is interested.

In the tutorial it says:
Scaling a picture smaller can boost high-contrast edges, making them brighter under ELA. Similarly, saving a JPEG with an Adobe product will automatically sharpen high-contrast edges and textures, making them appear much brighter than low-texture surfaces.
This might explain your observation; in fact, your image provides a beautiful illustration of that effect.

There are many obvious improvements, which are very easy to implement in Mathematica.
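For instance (my own sketch, not Marco's code), the round trip through the file system can be avoided with ExportString/ImportString, which keeps the JPEG recompression entirely in memory; the function name ela and the default quality are my choices:

```mathematica
(* Sketch of an in-memory ELA step: re-encode the image as JPEG at a
   chosen compression level and subtract the result from the original. *)
ela[img_Image, comp_ : 0.05] :=
 ImageAdjust[
  ImageSubtract[img,
   ImportString[
    ExportString[img, "JPEG", "CompressionLevel" -> comp], "JPEG"]]]
```

This also makes it cheap to map over a whole list of candidate compression levels, e.g. ela[img, #] & /@ {0.01, 0.05, 0.1}.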

PS: There is, by the way, a very similar and simple trick which one can use to estimate the complexity of time series based on compression rates. Using an algorithm such as gzip one can very crudely estimate the so-called algorithmic complexity. It is very easy and fast to study the parameter space of a dynamical system and look, e.g., for chaotic regions.
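A minimal sketch of that trick (my wording, using Mathematica's Compress in place of gzip): sweep the parameter of the logistic map and plot the compressed size of each orbit; chaotic regions should show up as peaks, periodic windows as dips. The discretisation factor and orbit length are arbitrary choices.

```mathematica
(* Sketch: crude complexity estimate of logistic-map orbits via the
   size of the compressed, integer-discretised time series. *)
complexity[r_] :=
 ByteCount[Compress[Round[1000 NestList[r # (1 - #) &, 0.4, 500]]]]

ListLinePlot[Table[{r, complexity[r]}, {r, 2.5, 4.0, 0.005}],
 AxesLabel -> {"r", "compressed size"}]
```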
POSTED BY: Marco Thiel
Marco, this is such a simple but neat trick! I tried (naively) a slightly different approach by resizing the image, but that seems to pick up only on sharper edges, I think:
img = Import[""];
id = ImageDimensions[img];
ImageSubtract[#, ImageResize[ImageResize[#, id/5], id]] &@img

POSTED BY: Sam Carrettie
Dear all,

the following two-liner is based on the fact that the different parts of a fake image might have different compression rates in certain image formats, such as JPEG. Lossless formats do not work for this.

We start with a fake jpg-image downloaded from the internet:

First I import the image from the desktop:

img1 = Import["~/Desktop/Fake.jpg"]

Then I export it with a low compression rate:
Export["~/Desktop/img1comp.jpg", img1, "CompressionLevel" -> 0.012]

I then import the new image
img2 = Import["~/Desktop/img1comp.jpg"];

and then subtract the images and adjust the colour range.
ImageSubtract[img1, img2] // ImageAdjust

In the resulting image the forged parts are supposed to light up. Here only slightly; it will work better once the compression rate is adapted.

This illustrates just the general idea. There is an "aura" of brighter pixels around the object that is copied in.
Everything can be joined up into a (kind of) one-liner. I then wrap a little Manipulate command around it to vary the compression rate. To make this work one needs to play a bit with the compression rates, i.e. move the slider.
Manipulate[
 Export["~/Desktop/img1comp.jpg", img1, "CompressionLevel" -> comp];
 img2 = Import["~/Desktop/img1comp.jpg"];
 ImageSubtract[img1, img2] // ImageAdjust, {{comp, 0.05}, 0, 0.5}]

Summary: (i) this is only a very crude algorithm, but it works in some cases, (ii) Mathematica certainly has all the algorithms to detect forgery in images, (iii) the shark in the image appears to be copied in :-)

A nice game is to write a little web crawler to download images from common social network sites and analyse how many of them are fake. Note that this particular algorithm is not particularly useful for detecting forgeries of photos on facebook or dating websites, because those are often not "copy-and-paste" edits but rather enlarge or shrink certain parts of an image.

Here is another example:

The output of the program is:

It is important to note that this method is not at all foolproof; there are often false positives.
POSTED BY: Marco Thiel

It is true that ImageKeypoints uses SURF, but ImageFeatureTrack does not. Conceptually, ImageFeatureTrack uses corners.

POSTED BY: Matthias Odisio
Posted 11 years ago
It might be worth mentioning that ImageKeypoints provided by Mathematica uses a SURF detector to find keypoints. SURF is a development partially based on SIFT, which the authors of the referenced paper have evidently used. ImageCorrespondingPoints and ImageFeatureTrack use SURF data from ImageKeypoints to do their job.
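A minimal usage sketch (mine, not from the post) showing what ImageKeypoints returns, using one of the built-in test images:

```mathematica
(* Sketch: find SURF keypoints and overlay them on the image. *)
img = ExampleData[{"TestImage", "House"}];
keypoints = ImageKeypoints[img];  (* a list of {x, y} positions *)
Show[img, Graphics[{Red, PointSize[Medium], Point[keypoints]}]]
```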
POSTED BY: Jari Kirma
There's a lot to read there. To implement this, you will want to break your problem up into the same parts that the video shows. 

The first half appears to be some cluster analysis to identify parts of the image that are likely candidates for copied parts. Can you tell us more about how they implement this and what methods they use?

The second half is more straightforward and there are built-in functions for it. This part involves finding the corresponding points of the matched parts and finding the geometric transformation that relates them. To do this use ImageCorrespondingPoints and then pass the corresponding points to FindGeometricTransform.
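A hedged sketch of that second half (my own example, matching an image against a transformed copy of itself; in a real copy-move case the two inputs would be candidate regions cropped from the same image):

```mathematica
(* Sketch: recover the geometric transformation relating two images. *)
img1 = ExampleData[{"TestImage", "House"}];
img2 = ImageRotate[img1, 15 Degree];
{pts1, pts2} = ImageCorrespondingPoints[img1, img2];
{error, tf} = FindGeometricTransform[pts1, pts2];
tf  (* a TransformationFunction approximating the rotation *)
```

A small fitting error together with a non-identity TransformationFunction is the signature one would look for: it says the matched regions are related by a rigid copy rather than being coincidentally similar.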
POSTED BY: Sean Clarke