A basic digital watermarking library for images and videos with Python 3.
(Edited on 2017/11/08)
You can do the following things with this library.
Input/Output of images and videos
You can read and write images and videos in formats such as bmp, png, jpg, mp4, avi, and so on.
Split images and videos
You can divide an image into blocks.
You can also split a video into its individual frames as images.
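As a rough sketch of the block-splitting idea in plain NumPy (split_into_blocks is a hypothetical helper for illustration, not this library's API; the library provides its own functions such as colorimage2block):

```python
import numpy as np

def split_into_blocks(image, block_h, block_w):
    """Split an H x W (x C) image into a grid of block_h x block_w blocks.

    Assumes the image dimensions are divisible by the block size;
    any remainder rows/columns are dropped.
    """
    h, w = image.shape[:2]
    rows, cols = h // block_h, w // block_w
    blocks = (image[:rows * block_h, :cols * block_w]
              .reshape(rows, block_h, cols, block_w, *image.shape[2:])
              .swapaxes(1, 2))
    return blocks  # shape: (rows, cols, block_h, block_w, ...)

image = np.arange(16 * 16 * 3).reshape(16, 16, 3)
blocks = split_into_blocks(image, 8, 8)
print(blocks.shape)  # (2, 2, 8, 8, 3)
```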
Digital watermark by bit replace method
The bit-replace method changes specific bits of the image pixels.
You can embed and extract secret information with the bit-replace method.
Humans cannot perceive changes in the low-order bits.
It is a simple method, but it is vulnerable to attacks.
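The idea behind bit replacement can be sketched in a few lines of plain NumPy (embed_lsb and extract_lsb are illustrative helpers, not this library's API, which uses embedBitReplace/extractBitReplace):

```python
import numpy as np

def embed_lsb(layer, secret_bits):
    # Overwrite the least significant bit of the first pixels with the secret bits.
    stego = layer.copy().ravel()
    for k, bit in enumerate(secret_bits):
        stego[k] = (stego[k] & 0xFE) | bit  # clear the LSB, then set it to the secret bit
    return stego.reshape(layer.shape)

def extract_lsb(stego, n_bits):
    # Read the secret back out of the least significant bits.
    return [int(v & 1) for v in stego.ravel()[:n_bits]]

layer = np.array([[200, 201], [202, 203]], dtype=np.uint8)
secret = [1, 0, 1, 1]
stego = embed_lsb(layer, secret)
print(extract_lsb(stego, 4))  # [1, 0, 1, 1]
```

Each pixel changes by at most 1, which is why the embedding is invisible but also why any lossy processing destroys it.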
Digital watermark using frequency domain
You can embed secret information in the frequency domain of the image using the DCT transform.
Changes in the frequency domain are harder to perceive than changes in the time domain.
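Why a frequency-domain change is hard to perceive can be sketched with a hand-rolled orthonormal DCT (illustrative only; the library's own dct_dim2/idct_dim2 are used in the samples below). A large change to one coefficient is spread thinly over every pixel of the block:

```python
import numpy as np

def dct_matrix(n):
    # Orthonormal DCT-II basis matrix: C @ C.T == identity
    j = np.arange(n)
    C = np.sqrt(2.0 / n) * np.cos(np.pi * (2 * j[None, :] + 1) * j[:, None] / (2 * n))
    C[0, :] = np.sqrt(1.0 / n)
    return C

n = 8
C = dct_matrix(n)
block = np.random.default_rng(0).integers(0, 256, size=(n, n)).astype(float)

coeffs = C @ block @ C.T      # forward 2-D DCT
coeffs[3, 4] += 50.0          # add the "watermark" to one mid-frequency coefficient
stego = C.T @ coeffs @ C      # inverse 2-D DCT

# The +50 change is spread across all 64 pixels, so no single pixel
# moves by more than 50/4 = 12.5 (the maximum 2-D basis amplitude).
print(np.max(np.abs(stego - block)))
```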
Digital watermark using spread spectrum
You can embed secret information using the correlation properties of spread-spectrum sequences.
Because the secret information is spread over a wide band, it is robust against noise.
The spreading sequence serves as the secret key.
You can use M-sequences and CCC (Complete Complementary Codes) in this library.
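The correlation property that makes M-sequence extraction work can be sketched with a simple LFSR (lfsr_mseq is an illustrative helper, not this library's generateM; the taps correspond to the primitive polynomial x^3 + x + 1):

```python
import numpy as np

def lfsr_mseq(taps, n_bits):
    # Fibonacci LFSR: XOR the tapped stages to form the feedback bit.
    state = [1] * n_bits
    seq = []
    for _ in range(2 ** n_bits - 1):  # a maximal-length sequence has period 2^n - 1
        seq.append(state[-1])
        fb = 0
        for t in taps:
            fb ^= state[t - 1]
        state = [fb] + state[:-1]
    return np.array(seq)

m = 1 - 2 * lfsr_mseq(taps=[3, 1], n_bits=3)  # map {0,1} -> {+1,-1}
N = len(m)  # 7

# Periodic autocorrelation: N at zero shift, -1 at every other shift,
# so a correlator can pick out the embedded sequence reliably.
corr = [int(np.dot(m, np.roll(m, s))) for s in range(N)]
print(corr)  # [7, -1, -1, -1, -1, -1, -1]
```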
Calculate BER or PSNR
You can calculate the BER or PSNR.
These values evaluate the performance of a digital watermark.
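The common textbook definitions of these two metrics, sketched in NumPy (ber and psnr here are illustrative, not the library's calcBER/calcPSNR):

```python
import numpy as np

def ber(sent, received):
    # Bit Error Rate: fraction of mismatched bits, in percent.
    sent, received = np.asarray(sent), np.asarray(received)
    return 100.0 * np.mean(sent != received)

def psnr(original, distorted, peak=255.0):
    # Peak Signal-to-Noise Ratio: 20*log10(MAX / sqrt(MSE)) for 8-bit images.
    mse = np.mean((np.asarray(original, float) - np.asarray(distorted, float)) ** 2)
    return float('inf') if mse == 0 else 20 * np.log10(peak / np.sqrt(mse))

print(ber([1,0,1,0,1,0,1,0], [1,1,1,1,0,0,0,0]))  # 50.0
```

A low BER means the secret survives, and a high PSNR means the stego image looks like the original, so a good watermark maximizes both.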
※ If you have already installed OpenCV, you can skip this step.
Using Homebrew:
$ brew install opencv3 --with-python3
$ git clone https://github.com/piraaa/VideoDigitalWatermarking.git
Put your Python file in the same directory as "VideoDigitalWatermarking".
Then you only need to write
from VideoDigitalWatermarking import *
to import it.
You can use all of the following functions.
For more information, please install this library and open "VideoDigitalWatermarking/html/index.html" in your browser.
Some sample programs:
Change the bits of the red layer in the time domain with the bit-replace method. We use the LSB to minimize the effect on the image.
#coding: utf-8
from VideoDigitalWatermarking import *
import numpy as np

fnin = 'test.bmp'
fnout = 'test_embeded.bmp'
secret_data = [1,1,1,1,0,0,0,0]

rgb_data = readColorImage(fnin)
red_data = getRgbLayer(rgb_data, rgb=RED)
embeded_red_data = embedBitReplace(red_data, secret_data, bit=1, interval=0)

#replace red_data with embeded_red_data
height = red_data.shape[0]
width = red_data.shape[1]
for i in np.arange(height):
    for j in np.arange(width):
        rgb_data[i][j][RED] = embeded_red_data[i][j]

writeImage(fnout, rgb_data)
Extract secret information from the LSB of the red layer in the time domain.
#coding: utf-8
from VideoDigitalWatermarking import *
fn_cover = 'test.bmp'
fn_stego = 'test_embeded.bmp'
rgb_cover = readColorImage(fn_cover)
rgb_stego = readColorImage(fn_stego)
red_cover = getRgbLayer(rgb_cover, rgb=RED)
red_stego = getRgbLayer(rgb_stego, rgb=RED)
secret_data = extractBitReplace(red_cover, red_stego, 8, bit=1, interval=0)
print(secret_data)
Read "test.bmp".
Read "test_embeded.bmp".
[ 1. 1. 1. 1. 0. 0. 0. 0.]
Change an arbitrary bit of the Y layer in the frequency domain with the bit-replace method. We use a high-order bit to avoid quantization error.
#coding: utf-8
from VideoDigitalWatermarking import *
import numpy as np

fnin = 'test.bmp'
fnout = 'test_embeded.bmp'
secret_data = [1,1,1,1,0,0,0,0]

rgb_data = readColorImage(fnin)
ycc_data = rgb2ycc(rgb_data)
y_data = get_y(ycc_data)
dct_data = dct_dim2(y_data)
embeded_dct_y_data = embedBitReplace(dct_data, secret_data, bit=5, interval=100)
embeded_y_data = idct_dim2(embeded_dct_y_data)

#replace y_data with embeded_y_data
height = ycc_data.shape[0]
width = ycc_data.shape[1]
for i in np.arange(height):
    for j in np.arange(width):
        ycc_data[i][j][0] = embeded_y_data[i][j]

embeded_rgb_data = ycc2rgb(ycc_data)
#print(rgb_data[0][0], embeded_rgb_data[0][0])
writeImage(fnout, embeded_rgb_data)
Extract the secret information from an arbitrary bit of the Y layer in the frequency domain.
#coding: utf-8
from VideoDigitalWatermarking import *
fn_cover = 'test.bmp'
fn_stego = 'test_embeded.bmp'
rgb_cover = readColorImage(fn_cover)
rgb_stego = readColorImage(fn_stego)
#print(rgb_cover[0][0], rgb_stego[0][0])
ycc_cover = rgb2ycc(rgb_cover)
ycc_stego = rgb2ycc(rgb_stego)
y_cover = get_y(ycc_cover)
y_stego = get_y(ycc_stego)
dct_cover = dct_dim2(y_cover)
dct_stego = dct_dim2(y_stego)
#print(dct_cover[0][0], dct_stego[0][0])
secret_data = extractBitReplace(dct_cover, dct_stego, 8, bit=5, interval=100)
print(secret_data)
Read "test.bmp".
Read "test_embeded.bmp".
[ 1. 1. 1. 1. 0. 0. 0. 0.]
Embed secret information by spread spectrum using an M-sequence.
(Note: currently only τ=1 is supported. This is being fixed.)
#coding: utf-8
from VideoDigitalWatermarking import *
import numpy as np
import math
fnin = 'test.bmp'
fnout = 'test_embeded.bmp'
secret_data = [1,1,1,1,0,0,0]
secret_length = len(secret_data)
N = math.ceil(math.log2(secret_length+1))
m = generateM(N)
print('m =', m, '\n')
rgb_data = readColorImage(fnin)
red_data = getRgbLayer(rgb_data, rgb=RED)
embeded_red_data = embedMseq(red_data, secret_data, m, a=1, tau=1)
#replace red_data with embeded_red_data
height = red_data.shape[0]
width = red_data.shape[1]
for i in np.arange(height):
    for j in np.arange(width):
        rgb_data[i][j][RED] = embeded_red_data[i][j]
writeImage(fnout, rgb_data)
The same embedding can also be done in the frequency domain:
#coding: utf-8
from VideoDigitalWatermarking import *
import numpy as np
import math
fnin = 'test.bmp'
fnout = 'test_embeded.bmp'
secret_data = [1,1,1,1,0,0,0]
secret_length = len(secret_data)
N = math.ceil(math.log2(secret_length+1))
m = generateM(N)
print('m =', m, '\n')
rgb_data = readColorImage(fnin)
ycc_data = rgb2ycc(rgb_data)
y_data = get_y(ycc_data)
dct_data = dct_dim2(y_data)
embeded_dct_y_data = embedMseq(dct_data, secret_data, m, a=100, tau=1)
embeded_y_data = idct_dim2(embeded_dct_y_data)
#replace y_data with embeded_y_data
height = ycc_data.shape[0]
width = ycc_data.shape[1]
for i in np.arange(height):
    for j in np.arange(width):
        ycc_data[i][j][0] = embeded_y_data[i][j]
embeded_rgb_data = ycc2rgb(ycc_data)
writeImage(fnout, embeded_rgb_data)
m = [1, 1, -1, 1, -1, -1, 1]
Read "test.bmp".
Write "test_embeded.bmp".
Extract secret information by spread spectrum using an M-sequence.
#coding: utf-8
from VideoDigitalWatermarking import *
import math
fn_cover = 'test.bmp'
fn_stego = 'test_embeded.bmp'
secret_length = 7 #secret information length
N = math.ceil(math.log2(secret_length+1))
m = generateM(N)
print('m =', m, '\n')
rgb_cover = readColorImage(fn_cover)
rgb_stego = readColorImage(fn_stego)
red_cover = getRgbLayer(rgb_cover, rgb=RED)
red_stego = getRgbLayer(rgb_stego, rgb=RED)
secret_data = extractMseq(red_cover, red_stego, secret_length, m, tau=1)
print(secret_data)
Extraction in the frequency domain follows the same pattern:
#coding: utf-8
from VideoDigitalWatermarking import *
import math
fn_cover = 'test.bmp'
fn_stego = 'test_embeded.bmp'
secret_length = 7 #secret information length
N = math.ceil(math.log2(secret_length+1))
m = generateM(N)
print('m =', m, '\n')
rgb_cover = readColorImage(fn_cover)
rgb_stego = readColorImage(fn_stego)
ycc_cover = rgb2ycc(rgb_cover)
ycc_stego = rgb2ycc(rgb_stego)
y_cover = get_y(ycc_cover)
y_stego = get_y(ycc_stego)
dct_cover = dct_dim2(y_cover)
dct_stego = dct_dim2(y_stego)
secret_data = extractMseq(dct_cover, dct_stego, secret_length, m, tau=1)
print(secret_data)
m = [1, 1, -1, 1, -1, -1, 1]
Read "test.bmp".
Read "test_embeded.bmp".
[1, 1, 1, 1, 0, 0, 0]
Embed and extract secret information by spread spectrum using Complete Complementary Codes.
#coding: utf-8
from VideoDigitalWatermarking import *
secret_data = [1,1,1,1,0,0,0,0]
secret_length = len(secret_data)
print('CCC')
ccc = generateCCC(2)
print(ccc, '\n')
#Embed
basic = createBasicSeq(ccc, secret_length, tau=1, ch=1)
print('basic = ', basic, '\n')
es = createEmbedSeq(basic, secret_data, a=1, tau=1)
print('Embed Sequence =', es, '\n')
#Extract
secret = extractCCC(ccc, es, secret_length, tau=1, ch=1)
print('secret =', secret)
CCC
[[[ 1. 1. -1. 1.]
[-1. 1. 1. 1.]]
[[ 1. 1. 1. -1.]
[-1. 1. -1. -1.]]]
basic = [1. 1. -1. 1. 0. 0. 0. 0. 0. 0. 0. -1. 1. 1. 1. 0. 0. 0. 0. 0. 0. 0.]
Embed Sequence = [ 1 2 1 2 0 -2 0 -2 -1 0 -1 -1 0 1 2 4 2 0 -2 -3 -2 -1 0 0 0 0 0 0 0]
secret = [1, 1, 1, 1, 0, 0, 0, 0]
You can input a video and output each frame as an image.
#coding: utf-8
from VideoDigitalWatermarking import *
filename = 'test.mp4'
video2image(filename, n=5)
frame num = 5
fps = 30
height = 1080
width = 1920
Export 5 jpeg Images.
You can input an image and divide it into blocks.
#coding: utf-8
from VideoDigitalWatermarking import *
import numpy as np

filename = 'test.bmp'
image = readColorImage(filename)
blocks = colorimage2block(image, [128,128])
#print(blocks.shape)
for i in np.arange(blocks.shape[0]):
    for j in np.arange(blocks.shape[1]):
        writeImage(str(i*blocks.shape[1]+j+1) + '.bmp', blocks[i][j])
Calculate correlation functions.
#coding: utf-8
from VideoDigitalWatermarking import *
#test
x=[1,-1,1]
y=[1,-1,1]
cycle = correlate(x, y, CYCLE)
noncycle = correlate(x, y, NON_CYCLE)
print('CYCLE =', cycle)
print('NON CYCLE =', noncycle)
CYCLE = [-1 -1 3 -1 -1]
NON CYCLE = [ 1 -2 3 -2 1]
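The outputs above can be reproduced with plain NumPy, assuming CYCLE means periodic (circular) correlation and NON_CYCLE means aperiodic correlation:

```python
import numpy as np

x = np.array([1, -1, 1])
y = np.array([1, -1, 1])

# Aperiodic correlation over all overlaps of the two sequences.
noncycle = np.correlate(x, y, mode='full')

# Periodic correlation: dot product against circular shifts of y.
cycle = np.array([np.dot(x, np.roll(y, s)) for s in range(-2, 3)])

print('CYCLE =', cycle)        # [-1 -1  3 -1 -1]
print('NON CYCLE =', noncycle) # [ 1 -2  3 -2  1]
```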
Calculate Bit Error Rate.
#coding: utf-8
from VideoDigitalWatermarking import *
data1 = [1,0,1,0,1,0,1,0]
data2 = [1,1,1,1,0,0,0,0]
ber = calcBER(data1, data2)
print('BER =', ber, '[%]')
BER = 50.0 [%]
Calculate Peak Signal-to-Noise Ratio.
#coding: utf-8
from VideoDigitalWatermarking import *
import numpy as np
a = np.array([[[11,10,10],[20,20,20],[30,30,30]],[[10,10,10],[20,20,20],[30,30,30]],[[10,10,10],[20,20,20],[30,30,30]]])
b = np.array([[[10,10,10],[20,20,20],[30,30,30]],[[10,10,10],[20,20,20],[30,30,30]],[[10,10,10],[20,20,20],[30,30,30]]])
c = np.array([[[10,10,10],[20,20,20],[30,30,30]],[[10,10,10],[20,20,20],[30,30,30]],[[10,10,10],[20,20,20],[30,30,30]]])
psnr = calcPSNR(a, b)
print('PSNR =', psnr, '[dB]')
psnr = calcPSNR(b, c)
print('PSNR =', psnr, '[dB]')
PSNR = 38.37903944592942 [dB]
PSNR = -inf [dB]
This program uses OpenCV for input and output.
Please see here for the OpenCV license.
Sphinx is a documentation tool for Python.
The HTML documentation in this library was created with Sphinx.
2017/11/08