CN109829475A - Image dark watermark processing method and device based on deep learning - Google Patents
Image dark watermark processing method and device based on deep learning
- Publication number
- CN109829475A (application CN201811610063.3A)
- Authority
- CN
- China
- Prior art keywords
- watermark
- information
- point
- image
- dot
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Abstract
An embodiment of the invention discloses an image dark watermark processing method and device based on deep learning, in which a classification model for identifying whether a mark point exists in an image is trained in advance. When a target picture to which a watermark dot matrix has been added is identified, the classification model classifies whether a mark point exists in the point location image of each point location, the watermark dot matrix is obtained from the output results of the classification model, and the source information of the picture is then restored from the watermark dot matrix. The classification model is trained by machine learning on a large number of training samples, and the presence of a mark point at each point location is judged by the classification model, so the accuracy is high. The method solves the problem that manually formulated rules are difficult to identify correctly after rich and varied compression, improves recognizability and recognition accuracy, and improves the success rate of watermark information restoration.
Description
Technical Field
The embodiment of the invention relates to the technical field of image watermark processing, and in particular to an image dark watermark processing method and device based on deep learning.
Background
A watermark is generally used to identify the origin of a picture or to declare copyright; the source information of the picture can be superimposed on the original picture in dot matrix form to obtain a watermarked picture. When a dot matrix is used as the watermark template, the dots of the dot matrix need to be small and their color close to the background in order to keep the dot matrix watermark imperceptible. However, in real scenarios many communication tools and screenshot tools compress images heavily. Although an image that has undergone Gaussian smoothing or cosine transformation differs little from the pre-compression image to the human eye, its local characteristics change greatly and many points in the dot matrix become blurred, so the dot detection rules fail, the watermark information written in the dot matrix cannot be extracted, and tracking the source of a leaked image becomes very difficult.
In practical use, the inventor found that with existing watermark images added in dot matrix form, the watermark information often cannot be extracted because of picture compression, and the watermark recognition rate is low.
Disclosure of Invention
The invention aims to solve the problem that, with existing watermark images added in dot matrix form, the watermark information often cannot be extracted because of picture compression and the watermark recognition rate is low.
In view of the above technical problems, an embodiment of the present invention provides an image dark watermark processing method based on deep learning, including:
acquiring a target picture to be subjected to watermark information extraction and positioning point coordinates of a first watermark dot matrix superposed in the target picture, and determining point position coordinates corresponding to each first point position in the first watermark dot matrix in the target picture according to the positioning point coordinates of the first watermark dot matrix;
for each first point location, acquiring a first point location image which is captured in the target picture by taking a point location coordinate corresponding to the first point location as a center, taking the first point location image as an input parameter of a classification model obtained through deep learning, and acquiring an output result of whether a mark point exists in the first point location image output by the classification model;
and obtaining the first watermark dot matrix according to the output result corresponding to each first point location, obtaining a first dot matrix sequence from the first watermark dot matrix, and restoring source information from the first dot matrix sequence, wherein the source information is the watermark information superimposed in the target picture and is output.
The embodiment of the invention provides an image dark watermark processing device based on deep learning, which comprises:
the coordinate determination module is used for acquiring a target picture to be subjected to watermark information extraction and the locating point coordinates of a first watermark dot matrix superposed in the target picture, and determining point location coordinates corresponding to each first point in the first watermark dot matrix in the target picture according to the locating point coordinates of the first watermark dot matrix;
the image identification module is used for acquiring a first point image which is intercepted in the target picture by taking a point coordinate corresponding to the first point as a center for each first point, taking the first point image as an input parameter of a classification model obtained through deep learning, and acquiring an output result of whether a mark point exists in the first point image output by the classification model;
and the information restoration module is used for obtaining the first watermark dot matrix according to the output result corresponding to each first point location, obtaining a first dot matrix sequence from the first watermark dot matrix, and restoring source information from the first dot matrix sequence, wherein the source information is the watermark information superimposed in the target picture and is output.
An embodiment of the present invention provides an electronic device, including:
at least one processor, at least one memory, a communication interface, and a bus; wherein,
the processor, the memory and the communication interface complete mutual communication through the bus;
the communication interface is used for information transmission between the electronic equipment and communication equipment of other electronic equipment;
the memory stores program instructions executable by the processor, which when called by the processor are capable of performing the methods described above.
The present embodiments provide a non-transitory computer-readable storage medium storing computer instructions that cause the computer to perform the method described above.
The embodiment of the invention provides an image dark watermark processing method and device based on deep learning, in which a classification model for identifying whether a mark point exists in an image is trained in advance. When a target picture to which a watermark dot matrix has been added is identified, the classification model classifies whether a mark point exists in the point location image of each point location, the watermark dot matrix is obtained from the output results of the classification model, and the source information of the picture is then restored from the watermark dot matrix. The classification model is trained by machine learning on a large number of training samples, and the presence of a mark point at each point location is judged by the classification model, so the accuracy is high. The method solves the problem that manually formulated rules are difficult to identify correctly after rich and varied compression, improves recognizability and recognition accuracy, and improves the success rate of watermark information restoration.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are some embodiments of the present invention, and those skilled in the art can also obtain other drawings according to the drawings without creative efforts.
Fig. 1 is a schematic diagram of a process for generating a dot matrix watermark image for comparison according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a comparative dot matrix watermark information extraction process provided by another embodiment of the present invention;
fig. 3 is a schematic flowchart of watermark information extraction in a dark image watermarking processing method based on deep learning according to another embodiment of the present invention;
fig. 4 is a schematic flowchart of adding watermark information in a dark image watermarking processing method based on deep learning according to another embodiment of the present invention;
FIG. 5 is a schematic view of a dot matrix dark watermark image generation process provided by another embodiment of the present invention;
FIG. 6 is a schematic flow chart of classification model training according to another embodiment of the present invention;
fig. 7 is a schematic flowchart of watermark information extraction provided by another embodiment of the present invention;
fig. 8 is a block diagram of a dark image watermarking processing apparatus based on deep learning according to another embodiment of the present invention;
fig. 9 is a block diagram of an electronic device according to another embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
Before the dark watermark processing method provided by the present invention is introduced, a dark watermark processing method used for comparison is described. Fig. 1 is a schematic diagram of a process of generating a dot matrix watermark image for comparison provided in this embodiment, and fig. 2 is a schematic diagram of a process of extracting dot matrix watermark information for comparison provided in this embodiment. Referring to fig. 1, in the comparison scheme, the current user and terminal information is converted into a dot matrix sequence in advance and used as the watermark template. When the image is sent out, the watermark template is scaled to a suitable size, rendered in a color close to the background color of the original image, and superimposed on the original image; the superimposed image is then used to complete the sending or saving operation.
Referring to fig. 2, in the comparison scheme, when the image source needs to be proved, the point positions must be detected with manual assistance based on color correlation rules, and the sending-source information written in the dot matrix must be restored according to the dot matrix rules.
It can be seen that the watermark information extraction process relies on detecting point positions through color correlation rules, which requires the points to be clearly distinguishable; however, the distinguishability of the points decreases during image transmission and compression, so the watermark dot matrix cannot be identified.
In order to solve the problem in the comparison scheme, fig. 3 is a schematic flowchart of watermark information extraction in the image dark watermark processing method based on deep learning according to this embodiment, and referring to fig. 3, the method includes:
301: acquiring a target picture to be subjected to watermark information extraction and positioning point coordinates of a first watermark dot matrix superposed in the target picture, and determining point position coordinates corresponding to each first point position in the first watermark dot matrix in the target picture according to the positioning point coordinates of the first watermark dot matrix;
302: for each first point location, acquiring a first point location image which is captured in the target picture by taking a point location coordinate corresponding to the first point location as a center, taking the first point location image as an input parameter of a classification model obtained through deep learning, and acquiring an output result of whether a mark point exists in the first point location image output by the classification model;
303: obtaining the first watermark dot matrix according to the output result corresponding to each first point location, obtaining a first dot matrix sequence from the first watermark dot matrix, and restoring source information from the first dot matrix sequence, wherein the source information is the watermark information superimposed in the target picture and is output.
The method provided by this embodiment is executed by a computer or by a device dedicated to watermarking pictures and extracting watermark information. Steps 301 to 303 describe the watermark information extraction process. The positioning points in the watermark dot matrix are located at the upper-left, upper-right and lower-left positions of the dot matrix, and the points at these three positions are large, so their coordinates are easy to identify. When the watermark information is extracted, the coordinates of the three positioning points are input, and the position of each point in the watermark dot matrix is determined from these coordinates; a point location image is then generated for each point location, the classification model judges whether a mark point exists in the point location image, the watermark dot matrix is restored from the output results of the classification model, and the source information of the picture is then restored from the watermark dot matrix. For example, after a confidential sensitive image leaks, the image to which the watermark was added can be tracked back to its source and the responsible party located by this method.
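To make step 301 concrete, the sketch below shows one way the point location coordinates of the whole lattice could be interpolated from the three manually supplied locating points (upper-left, upper-right and lower-left). The patent does not give an algorithm, lattice dimensions or function names, so everything below, including the assumption of a regular undistorted grid, is an illustrative assumption rather than the patented method.

```python
import numpy as np

def grid_coordinates(top_left, top_right, bottom_left, rows, cols):
    """Interpolate the centre coordinate of every point location of a
    rows x cols watermark dot matrix from its three locating points.
    Arguments are (x, y) pixel coordinates; all names are illustrative."""
    tl = np.asarray(top_left, dtype=float)
    tr = np.asarray(top_right, dtype=float)
    bl = np.asarray(bottom_left, dtype=float)

    col_step = (tr - tl) / (cols - 1)   # step along the top edge
    row_step = (bl - tl) / (rows - 1)   # step along the left edge

    coords = np.empty((rows, cols, 2))
    for r in range(rows):
        for c in range(cols):
            coords[r, c] = tl + r * row_step + c * col_step
    return coords

# Example: a 21 x 21 lattice whose corner locating points were marked by hand.
pts = grid_coordinates((120, 80), (520, 80), (120, 480), rows=21, cols=21)
print(pts[0, 0], pts[-1, -1])   # upper-left and lower-right point centres
```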
The classification model is obtained by machine learning, for example by training a convolutional neural network on a large number of training samples. Compared with identifying the mark points at each point location by human eyes, the classification model has higher accuracy.
The embodiment provides an image dark watermark processing method based on deep learning, in which a classification model for identifying whether a mark point exists in an image is trained in advance. When a target picture to which a watermark dot matrix has been added is identified, the classification model classifies whether a mark point exists in the point location image of each point location, the watermark dot matrix is obtained from the output results of the classification model, and the source information of the picture is then restored from the watermark dot matrix. The classification model is trained by machine learning on a large number of training samples, and the presence of a mark point at each point location is judged by the classification model, so the accuracy is high. The method solves the problem that manually formulated rules are difficult to identify correctly after rich and varied compression, improves recognizability and recognition accuracy, and improves the success rate of watermark information restoration.
In general, the watermark information extraction stage includes the following steps. Step 1: manually inputting the coordinates of the three positioning points in the dot matrix; Step 2: predicting all point locations with the trained classification model to obtain a predicted dot matrix sequence; Step 3: complementing the dot matrix sequence with positioning information, version information and time sequence information to restore it into a quick response code, and identifying the decimal number written in the quick response code; Step 4: transcoding the decimal number obtained in step 3 to obtain the user information, device information and timestamp of the moment the screenshot occurred.
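As an illustration of steps 1 and 2 of this extraction stage, the following sketch crops a fixed-size patch around each computed point location and lets the trained classifier decide whether a mark point is present. The patch size, array layout and the `classifier` callable are assumptions introduced for the example; they are not specified in the patent.

```python
import numpy as np

def extract_lattice_sequence(picture, coords, classifier, patch=16):
    """Predict the 0/1 lattice value at every point location.

    picture    -- grayscale image as an H x W numpy array
    coords     -- (rows, cols, 2) array of point-location centres
    classifier -- callable mapping a patch to 1 (mark point) or 0 (no mark point)
    """
    rows, cols, _ = coords.shape
    bits = np.zeros((rows, cols), dtype=int)
    half = patch // 2
    for r in range(rows):
        for c in range(cols):
            x, y = coords[r, c].round().astype(int)
            crop = picture[y - half:y + half, x - half:x + half]
            bits[r, c] = classifier(crop)
    return bits          # predicted dot matrix; flatten for the lattice sequence
```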
The method adopts the pre-trained deep learning model in the watermark information extraction stage, and greatly improves the success rate of watermark information restoration.
Further, on the basis of the foregoing embodiments, the training of the classification model includes:
acquiring a plurality of pictures superimposed with known second watermark dot matrixes as sample pictures, and determining the corresponding point position coordinates of each second point position in the second watermark dot matrixes in the sample pictures according to the positioning point coordinates of the second watermark dot matrixes superimposed on the sample pictures for each sample picture;
for each second point location, taking the point location coordinate corresponding to the second point location as a center to intercept a second point location image in the sample picture, moving the second point location within a preset range to obtain a new third point location, and taking the point location coordinate corresponding to the third point location as a center to intercept a third point location image in the sample picture;
and taking the second dot image or the third dot image as a training input image, taking a result of whether the marking points exist in the training input image determined by the second watermark dot matrix as expected output to obtain training samples, obtaining a set consisting of the training samples obtained from all sample images as a training sample set, and performing machine learning on all the training samples in the training sample set to obtain the classification model.
The second point location image is a screenshot centered on the point location coordinate corresponding to the second point location, and the third point location image is a screenshot centered on the point location coordinate corresponding to the third point location obtained after the second point location is moved within a preset range. For example, if the preset range is the area bounded by moving the second point location up, down, left and right by up to 3 pixels, then within the preset range of each second point location, 49 point location images can be obtained, that is, one second point location image and 48 third point location images. Each point location image, together with the result of whether a mark point exists at that point location, forms a training sample, and the classification model is obtained by performing machine learning on the training sample set composed of these training samples.
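A minimal sketch of this sample-generation step, assuming grayscale sample pictures and a fixed patch size, is given below; it yields the 49 crops per point location (one centred second point location image plus 48 shifted third point location images). The helper name and parameter values are illustrative.

```python
import numpy as np

def point_location_crops(sample_picture, center_xy, has_mark, patch=16, shift=3):
    """Return 49 (image, label) training pairs for one known point location:
    the patch centred on the point plus every patch whose centre is shifted by
    up to `shift` pixels horizontally and vertically."""
    half = patch // 2
    x0, y0 = center_xy
    samples = []
    for dy in range(-shift, shift + 1):
        for dx in range(-shift, shift + 1):
            x, y = x0 + dx, y0 + dy
            crop = sample_picture[y - half:y + half, x - half:x + half]
            samples.append((crop, int(has_mark)))   # expected output from the known lattice
    return samples
```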
In practical application, even if the locating point coordinates deviate slightly or the cropped point location images are offset, the classification model can still accurately judge whether a mark point exists in the point location image.
This embodiment provides an image dark watermark processing method based on deep learning in which training samples are obtained by moving the point locations, ensuring the comprehensiveness of the training samples, so that the presence of a mark point can be correctly identified even when the cropped point location image is offset.
Further, on the basis of the above embodiments, the method further includes:
taking the second dot image or the third dot image which is not taken as a training sample as a verification input image, taking a result of whether mark points exist in the verification input image determined by the second watermark lattice as expected output to obtain a verification sample, and acquiring a set consisting of a plurality of verification samples as a verification sample set;
and calculating the accuracy of the output result of classifying each verification sample in the verification sample set by the classification model, and if the accuracy is less than the preset accuracy, continuing to train the classification model by using the newly generated training sample.
Further, calculating a correctness of an output result of classifying each verification sample in the set of verification samples by the classification model includes:
classifying each verification sample in the verification sample set with the classification model, calculating the ratio of the number of verification samples whose output result is the same as the corresponding expected output to the total number of verification samples in the verification sample set, and taking this ratio as the accuracy.
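A one-function sketch of this accuracy check, with an assumed threshold value (the patent only speaks of a "preset accuracy"):

```python
def verification_accuracy(classifier, verification_samples):
    """Fraction of verification samples whose predicted label matches the
    expected output; `verification_samples` is a list of (image, label) pairs
    and `classifier` any callable returning a 0/1 label (both assumed)."""
    correct = sum(1 for image, expected in verification_samples
                  if classifier(image) == expected)
    return correct / len(verification_samples)

# Continue training on newly generated samples while the accuracy stays below
# an assumed preset accuracy, e.g. 0.98.
```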
In general, the deep learning model training phase includes the following steps. Step 1: collecting a large number of compressed screenshot images as a data set, and labeling the coordinates of the three positioning points of each screenshot image; Step 2: calculating the coordinates of all point locations in the dot matrix from the positioning point coordinates of each image, and cropping fixed-size images centered on those coordinates and on their small-range translations to form a training data set and a verification data set; Step 3: training a convolutional neural network on the data set obtained in step 2 to obtain a classification model that judges whether a local area is dotted.
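The patent only states that a convolutional neural network is trained on the cropped patches; the architecture and training loop below (PyTorch, 16x16 grayscale patches, two convolutional blocks, two output classes for dotted / not dotted) are therefore purely illustrative assumptions, not the model actually claimed.

```python
import torch
import torch.nn as nn

class DotClassifier(nn.Module):
    """Small CNN deciding whether a local patch contains a mark point."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 16x16 -> 8x8
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 8x8 -> 4x4
        )
        self.head = nn.Sequential(nn.Flatten(),
                                  nn.Linear(32 * 4 * 4, 64), nn.ReLU(),
                                  nn.Linear(64, 2))                        # dotted / not dotted
    def forward(self, x):
        return self.head(self.features(x))

def train(model, loader, epochs=10, lr=1e-3):
    """loader yields (patches, labels): float tensors (N,1,16,16) and int labels 0/1."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    for _ in range(epochs):
        for patches, labels in loader:
            opt.zero_grad()
            loss_fn(model(patches), labels).backward()
            opt.step()
    return model
```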
Further, on the basis of the foregoing embodiments, the obtaining a plurality of pictures superimposed with a known second watermark lattice as sample pictures includes:
and compressing the pictures on which the known second watermark lattices are superimposed, and taking the pictures after compression as the sample pictures.
In practice, most of the pictures used for extracting the watermark information are compressed, so that when a training sample is generated, in order to improve the accuracy of the classification model, the compressed pictures need to be selected as sample pictures.
In this embodiment, a deep learning method is combined with a large and rich data set, which solves the problem that manually formulated rules are difficult to identify correctly after rich and varied compression; in addition, translating all point locations within a small range prevents the negative influence on the prediction result when the manually specified locating points deviate.
Further, on the basis of the foregoing embodiments, the obtaining the first watermark dot matrix according to the output result corresponding to each first point location, obtaining a first dot matrix sequence from the first watermark dot matrix, and restoring source information from the first dot matrix sequence, where the source information is the watermark information superimposed in the target picture, and outputting the source information includes:
obtaining the first watermark dot matrix according to the output result corresponding to each first dot, obtaining a first dot matrix sequence by the first watermark dot matrix, restoring a quick response code according to the first dot matrix sequence and the positioning information, the version information and the time sequence information of the supplemented quick response code, identifying decimal numbers in the quick response code, decoding the obtained decimal numbers to obtain source information of the target picture, wherein the source information is the watermark information superposed in the target picture, and outputting the source information;
the source information comprises user information, equipment information and a timestamp for sending the target picture.
The positioning information, version information and time sequence information are attribute information of the quick response code. For example, when the watermark dot matrix is encoded, 1 and 0 represent dotted and non-dotted respectively. After the first watermark dot matrix is obtained, a position with a mark point is recorded as 1 and a position without a mark point is recorded as 0, which yields the first dot matrix sequence. The first dot matrix sequence corresponds to the information on the data bits and error correction bits of the two-dimensional code; the two-dimensional code is obtained after the positioning information, version information and time sequence information are added to the first dot matrix sequence, the decimal number is then read from the two-dimensional code, and the decimal number is finally decoded to obtain the user information, device information and timestamp.
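A hedged sketch of this restoration step: it renders the completed 0/1 module matrix as a quick response code image and reads the decimal payload with OpenCV. Writing the positioning, version and timing modules back into the matrix depends on the QR version and is represented only by a placeholder callback; the module size and quiet-zone width are likewise assumptions.

```python
import numpy as np
import cv2

def decode_lattice(modules, module_px=10, complete_function_patterns=None):
    """Decode a (rows, cols) 0/1 matrix as a quick response code.
    `complete_function_patterns` stands in for the step that restores the
    positioning, version and timing modules; it is a hypothetical helper."""
    m = np.asarray(modules, dtype=np.uint8)
    if complete_function_patterns is not None:
        m = complete_function_patterns(m)
    img = np.where(m == 1, 0, 255).astype(np.uint8)    # 1 = dark module
    img = np.pad(img, 4, constant_values=255)          # quiet zone
    img = cv2.resize(img, None, fx=module_px, fy=module_px,
                     interpolation=cv2.INTER_NEAREST)
    payload, _, _ = cv2.QRCodeDetector().detectAndDecode(img)
    return payload    # the decimal string written at watermarking time
```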
This embodiment provides an image dark watermark processing method based on deep learning in which the source information is decoded and restored step by step from the obtained watermark dot matrix by reversing the method used to encode the watermark dot matrix from the source information, thereby extracting the watermark information.
Further, fig. 4 is a schematic flow chart of adding watermark information in the dark image watermarking processing method based on deep learning provided in this embodiment, and referring to fig. 4, on the basis of the above embodiment, the method further includes:
401: before an original picture is sent out or screenshot is carried out on the original picture, current user information, equipment information and a timestamp are obtained, the obtained user information, equipment information and timestamp are transcoded into decimal numbers, a quick response code is generated according to the transcoded decimal numbers, and a dot matrix sequence consisting of binary codes is extracted from the generated quick response code and is used as the first dot matrix sequence;
402: converting the first dot array sequence into a dot array consisting of mark points as a first watermark dot array according to the mapping relation between the binary code and the mark points;
403: and superposing the first watermark dot matrix to the original picture to obtain the target picture, and sending out the target picture or carrying out screenshot on the target picture.
This embodiment provides an image dark watermark processing method based on deep learning in which the target picture carrying the watermark is obtained by superimposing on the original picture a watermark dot matrix generated from the user information, device information and timestamp.
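As a sketch of steps 401 and 402, the snippet below transcodes hypothetical user and device identifiers plus a timestamp into a decimal string, encodes it with the third-party `qrcode` package, and flattens the module matrix into a 0/1 lattice sequence. The field widths, the error-correction level and the omission of the function-pattern-stripping step (the patent removes the positioning, version and timing modules before flattening) are simplifying assumptions.

```python
import time
import qrcode   # third-party package "qrcode", used here only for illustration

def build_lattice_sequence(user_id: int, device_id: int) -> list:
    """Transcode source information into a decimal string, encode it as a
    quick response code and flatten the modules into a 0/1 sequence."""
    timestamp = int(time.time())
    decimal_payload = f"{user_id:08d}{device_id:08d}{timestamp:010d}"  # assumed field widths

    qr = qrcode.QRCode(error_correction=qrcode.constants.ERROR_CORRECT_H,
                       box_size=1, border=0)
    qr.add_data(decimal_payload)
    qr.make(fit=True)
    modules = qr.get_matrix()                          # rows of booleans
    return [int(m) for row in modules for m in row]    # 1 = dot, 0 = no dot
```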
Further, on the basis of the foregoing embodiments, the superimposing the first watermark dot matrix onto the original picture to obtain the target picture includes:
and identifying an area with the gray value change rate smaller than a preset threshold value from the original picture as a target area, and overlaying the first watermark lattice to the target area to obtain the target picture.
And identifying the area with the gray value change rate smaller than the preset threshold value from the original picture through corresponding software.
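One plausible implementation of this region selection, sketched under the assumption of a fixed square window and a Sobel-based measure of the gray-value change rate (neither of which is specified in the patent):

```python
import numpy as np
import cv2

def find_flat_region(image_bgr, win=64, threshold=8.0):
    """Return the top-left corner of the window whose mean gray-level gradient
    magnitude is smallest and below `threshold`, i.e. an area where the gray
    value changes slowly; window size and threshold are assumed values."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    gx = cv2.Sobel(gray, cv2.CV_32F, 1, 0)
    gy = cv2.Sobel(gray, cv2.CV_32F, 0, 1)
    grad = cv2.magnitude(gx, gy)

    best, best_xy = np.inf, None
    for y in range(0, gray.shape[0] - win + 1, win):
        for x in range(0, gray.shape[1] - win + 1, win):
            score = grad[y:y + win, x:x + win].mean()
            if score < best:
                best, best_xy = score, (x, y)
    return best_xy if best < threshold else None
```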
In general, the watermark information encoding stage includes the following steps. Step 1: encoding the current user information, device information and timestamp into a 24-digit decimal number; Step 2: encoding the decimal number obtained in step 1 as a quick response code; Step 3: removing the positioning information, version information and time sequence information from the quick response code obtained in step 2, and recoding it into a sequence consisting only of 0 and 1; Step 4: representing 0 and 1 by a dot-free point and a dotted point respectively, converting the sequence obtained in step 3 into dot matrix form, and setting positioning points at the upper-left, upper-right and lower-left positions of the dot matrix to complete construction of the watermark template; Step 5: identifying a target area in the image using the gradient change of the gray level, superposing the watermark template obtained in step 4 on the target area to obtain the final image, and using that image to complete the outgoing or saving of the screenshot.
The invention uses the gradient change of the gray level to identify the target area, thereby obtaining a target area in which the gray value does not change drastically and ensuring the imperceptibility of the dot matrix watermark superimposed in that area. Meanwhile, the quick response code is used as the information encoding scheme, and its error correction capability increases the possibility that the watermark information can still be identified when some points are misrecognized.
In order to describe the generation phase of the image with the dot matrix dark watermark, the training phase of the classification model for judging whether the local part has a dot, and the extraction phase of the watermark information in the image with the dot matrix dark watermark in more detail, fig. 5 is a schematic diagram of a generation process of the dot matrix dark watermark provided by this embodiment, and referring to fig. 5, the process can be summarized as the following steps:
Step 1: acquiring the original image that is to be sent out or captured;
Step 2: transcoding the current user information, terminal information and time information into a decimal number, converting the decimal number into a quick response code image, converting the data bits and error correction bits of the quick response code into a sequence consisting only of 0 and 1, and converting the 0s and 1s in the sequence into a dot matrix sequence of non-dotted and dotted points respectively, to serve as the watermark template;
Step 3: identifying a target region in the original image using the gradient change of the gray level;
Step 4: superposing the watermark template generated in step 2 on the target area in the image to obtain the final image (a code sketch of this superposition step is given after this list);
Step 5: completing the outgoing or screenshot operation with the image from step 4.
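A sketch of the superposition in Step 4 follows, assuming a grid spacing, dot radius and intensity offset that are not given in the patent; the placement of the three enlarged positioning points is omitted for brevity.

```python
import numpy as np

def superimpose_watermark(original_bgr, lattice_bits, top_left, rows, cols,
                          spacing=12, dot_radius=1, delta=6):
    """Draw the watermark dot matrix onto the target area starting at
    `top_left`, using dot colors only slightly darker than the local
    background so that the watermark remains inconspicuous."""
    out = original_bgr.copy()
    bits = np.asarray(lattice_bits).reshape(rows, cols)
    x0, y0 = top_left
    for r in range(rows):
        for c in range(cols):
            if not bits[r, c]:
                continue                                  # 0 means no dot here
            y, x = y0 + r * spacing, x0 + c * spacing
            background = out[y, x].astype(int)
            dot_color = np.clip(background - delta, 0, 255)  # close to background
            out[y - dot_radius:y + dot_radius + 1,
                x - dot_radius:x + dot_radius + 1] = dot_color
    return out
```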
Fig. 6 is a schematic flowchart of the classification model training provided in this embodiment, and referring to fig. 6, the process may be summarized as the following steps:
Step 1: using a dot matrix of a fixed sequence as the watermark template and generating a large number of compressed screenshot images as the original data set;
Step 2: marking the coordinates of the three positioning points in each image of the original data set;
Step 3: calculating the coordinates of all point locations from the positioning point coordinates of each picture, cropping images centered on those coordinates and on their small-range translations as the final data set, and dividing the final data set into a training set and a verification set;
Step 4: building a convolutional neural network model;
Step 5: adjusting parameters and training the model;
Step 6: verifying the model effect on the verification set, and returning to Step 5 if the effect is not satisfactory;
Step 7: saving the model.
Fig. 7 is a schematic flowchart of extracting watermark information provided in this embodiment, and referring to fig. 7, this flowchart may be summarized as the following steps:
Step 1: manually marking the coordinates of the three positioning points in the image;
Step 2: inputting the positioning point coordinates into the pre-trained deep learning model to obtain a predicted dot matrix sequence;
Step 3: transcoding the non-dotted and dotted point locations in the dot matrix sequence into 0 and 1 respectively to obtain the data bits and error correction bits of the quick response code, and completing the positioning information, version information and time sequence information to obtain a complete quick response code;
Step 4: extracting the decimal number information from the quick response code image;
Step 5: transcoding the obtained decimal sequence into the user, device and time information of the sending source.
In this embodiment, a convolutional neural network is used to train a classification model for whether a local area contains a point. Tests show that it can accurately judge the points in images compressed by various communication tools and screenshot tools and can successfully restore the watermark information in the image under various complex conditions, which solves the problem of identifying dot matrix watermarks in compressed images.
Fig. 8 is a block diagram illustrating a deep learning based image dark watermark processing apparatus according to an embodiment of the present invention, and referring to fig. 8, the deep learning based image dark watermark processing apparatus according to the embodiment includes a coordinate determination module 801, an image identification module 802, and an information restoration module 803, wherein,
a coordinate determining module 801, configured to obtain a target picture to be subjected to watermark information extraction and locating point coordinates of a first watermark dot matrix superimposed in the target picture, and determine, according to the locating point coordinates of the first watermark dot matrix, point location coordinates corresponding to each first point in the first watermark dot matrix in the target picture;
an image identification module 802, configured to obtain, for each first point location, a first point location image captured in the target picture with a point location coordinate corresponding to the first point location as a center, use the first point location image as an input parameter of a classification model obtained through deep learning, and obtain an output result of whether a mark point exists in the first point location image output by the classification model;
the information restoring module 803 is configured to obtain the first watermark dot matrix according to an output result corresponding to each first dot, obtain a first dot matrix sequence from the first watermark dot matrix, and restore source information from the first dot matrix sequence, where the source information is watermark information superimposed in the target picture and is output.
The image dark watermark processing apparatus based on deep learning provided in this embodiment is suitable for the image dark watermark processing method based on deep learning provided in the foregoing embodiment, and details are not repeated here.
The embodiment of the invention provides an image dark watermark processing device based on deep learning, in which a classification model for identifying whether a mark point exists in an image is trained in advance. When a target picture to which a watermark dot matrix has been added is identified, the classification model classifies whether a mark point exists in the point location image of each point location, the watermark dot matrix is obtained from the output results of the classification model, and the source information of the picture is then restored from the watermark dot matrix. The classification model is trained by machine learning on a large number of training samples, and the presence of a mark point at each point location is judged by the classification model, so the accuracy is high. The device solves the problem that manually formulated rules are difficult to identify correctly after rich and varied compression, improves recognizability and recognition accuracy, and improves the success rate of watermark information restoration.
Fig. 9 is a block diagram showing the structure of the electronic apparatus provided in the present embodiment.
Referring to fig. 9, the electronic device includes: a processor (processor)901, a memory (memory)902, a communication Interface (Communications Interface)903, and a bus 904;
wherein,
the processor 901, the memory 902 and the communication interface 903 complete mutual communication through the bus 904;
the communication interface 903 is used for information transmission between the electronic device and communication devices of other electronic devices;
the processor 901 is configured to call program instructions in the memory 902 to perform the methods provided by the above-mentioned method embodiments, for example, including: acquiring a target picture to be subjected to watermark information extraction and positioning point coordinates of a first watermark dot matrix superposed in the target picture, and determining point position coordinates corresponding to each first point position in the first watermark dot matrix in the target picture according to the positioning point coordinates of the first watermark dot matrix; for each first point location, acquiring a first point location image which is captured in the target picture by taking a point location coordinate corresponding to the first point location as a center, taking the first point location image as an input parameter of a classification model obtained through deep learning, and acquiring an output result of whether a mark point exists in the first point location image output by the classification model; and obtaining the first watermark lattice according to the output result corresponding to each first dot, obtaining a first lattice sequence by the first watermark lattice, and restoring source information by the first lattice sequence, wherein the source information is the watermark information superposed in the target picture and is output.
The present embodiments provide a non-transitory computer-readable storage medium storing computer instructions that cause the computer to perform the methods provided by the above method embodiments, for example, including: acquiring a target picture to be subjected to watermark information extraction and positioning point coordinates of a first watermark dot matrix superposed in the target picture, and determining point position coordinates corresponding to each first point position in the first watermark dot matrix in the target picture according to the positioning point coordinates of the first watermark dot matrix; for each first point location, acquiring a first point location image which is captured in the target picture by taking a point location coordinate corresponding to the first point location as a center, taking the first point location image as an input parameter of a classification model obtained through deep learning, and acquiring an output result of whether a mark point exists in the first point location image output by the classification model; and obtaining the first watermark lattice according to the output result corresponding to each first dot, obtaining a first lattice sequence by the first watermark lattice, and restoring source information by the first lattice sequence, wherein the source information is the watermark information superposed in the target picture and is output.
The present embodiments disclose a computer program product comprising a computer program stored on a non-transitory computer readable storage medium, the computer program comprising program instructions which, when executed by a computer, enable the computer to perform the methods provided by the above-described method embodiments, for example, comprising: acquiring a target picture to be subjected to watermark information extraction and positioning point coordinates of a first watermark dot matrix superposed in the target picture, and determining point position coordinates corresponding to each first point position in the first watermark dot matrix in the target picture according to the positioning point coordinates of the first watermark dot matrix; for each first point location, acquiring a first point location image which is captured in the target picture by taking a point location coordinate corresponding to the first point location as a center, taking the first point location image as an input parameter of a classification model obtained through deep learning, and acquiring an output result of whether a mark point exists in the first point location image output by the classification model; and obtaining the first watermark lattice according to the output result corresponding to each first dot, obtaining a first lattice sequence by the first watermark lattice, and restoring source information by the first lattice sequence, wherein the source information is the watermark information superposed in the target picture and is output.
Those of ordinary skill in the art will understand that: all or part of the steps for implementing the method embodiments may be implemented by hardware related to program instructions, and the program may be stored in a computer readable storage medium, and when executed, the program performs the steps including the method embodiments; and the aforementioned storage medium includes: various media that can store program codes, such as ROM, RAM, magnetic or optical disks.
The above-described embodiments of the electronic device and the like are merely illustrative, where the units described as separate parts may or may not be physically separate, and the parts displayed as units may or may not be physical units, may be located in one place, or may also be distributed on multiple network units. Some or all of the modules may be selected according to actual needs to achieve the purpose of the solution of the present embodiment. One of ordinary skill in the art can understand and implement it without inventive effort.
Through the above description of the embodiments, those skilled in the art will clearly understand that each embodiment can be implemented by software plus a necessary general hardware platform, and certainly can also be implemented by hardware. With this understanding in mind, the above-described technical solutions may be embodied in the form of a software product, which can be stored in a computer-readable storage medium such as ROM/RAM, magnetic disk, optical disk, etc., and includes instructions for causing a computer device (which may be a personal computer, a server, or a network device, etc.) to execute the methods described in the embodiments or some parts of the embodiments.
Finally, it should be noted that: the above embodiments are only used for illustrating the technical solutions of the embodiments of the present invention, and are not limited thereto; although embodiments of the present invention have been described in detail with reference to the foregoing embodiments, those skilled in the art will understand that: the technical solutions described in the foregoing embodiments may still be modified, or some or all of the technical features may be equivalently replaced; and the modifications or the substitutions do not make the essence of the corresponding technical solutions depart from the scope of the technical solutions of the embodiments of the present invention.
Claims (16)
1. An image dark watermark processing method based on deep learning is characterized by comprising the following steps:
acquiring a target picture to be subjected to watermark information extraction and positioning point coordinates of a first watermark dot matrix superposed in the target picture, and determining point position coordinates corresponding to each first point position in the first watermark dot matrix in the target picture according to the positioning point coordinates of the first watermark dot matrix;
for each first point location, acquiring a first point location image which is captured in the target picture by taking a point location coordinate corresponding to the first point location as a center, taking the first point location image as an input parameter of a classification model obtained through deep learning, and acquiring an output result of whether a mark point exists in the first point location image output by the classification model;
and obtaining the first watermark lattice according to the output result corresponding to each first dot, obtaining a first lattice sequence by the first watermark lattice, and restoring source information by the first lattice sequence, wherein the source information is the watermark information superposed in the target picture and is output.
2. The method of claim 1, wherein the training of the classification model comprises:
acquiring a plurality of pictures superimposed with known second watermark dot matrixes as sample pictures, and determining the corresponding point position coordinates of each second point position in the second watermark dot matrixes in the sample pictures according to the positioning point coordinates of the second watermark dot matrixes superimposed on the sample pictures for each sample picture;
for each second point location, taking the point location coordinate corresponding to the second point location as a center to intercept a second point location image in the sample picture, moving the second point location within a preset range to obtain a new third point location, and taking the point location coordinate corresponding to the third point location as a center to intercept a third point location image in the sample picture;
and taking the second dot image or the third dot image as a training input image, taking a result of whether the marking points exist in the training input image determined by the second watermark dot matrix as expected output to obtain training samples, obtaining a set consisting of the training samples obtained from all sample images as a training sample set, and performing machine learning on all the training samples in the training sample set to obtain the classification model.
3. The method of claim 2, further comprising:
taking the second dot image or the third dot image which is not taken as a training sample as a verification input image, taking a result of whether mark points exist in the verification input image determined by the second watermark lattice as expected output to obtain a verification sample, and acquiring a set consisting of a plurality of verification samples as a verification sample set;
and calculating the accuracy of the output result of classifying each verification sample in the verification sample set by the classification model, and if the accuracy is less than the preset accuracy, continuing to train the classification model by using the newly generated training sample.
4. The method according to claim 2, wherein the obtaining, as the sample picture, a plurality of pictures superimposed with the known second watermark lattice comprises:
and compressing the pictures on which the known second watermark lattices are superimposed, and taking the pictures after compression as the sample pictures.
5. The method according to claim 1, wherein the obtaining the first watermark lattice according to the output result corresponding to each first point location, obtaining a first lattice sequence from the first watermark lattice, and recovering source information from the first lattice sequence, where the source information is watermark information superimposed on the target picture, and outputting the source information includes:
obtaining the first watermark dot matrix according to the output result corresponding to each first dot, obtaining a first dot matrix sequence by the first watermark dot matrix, restoring a quick response code according to the first dot matrix sequence and the positioning information, the version information and the time sequence information of the supplemented quick response code, identifying decimal numbers in the quick response code, decoding the obtained decimal numbers to obtain source information of the target picture, wherein the source information is the watermark information superposed in the target picture, and outputting the source information;
the source information comprises user information, equipment information and a timestamp for sending the target picture.
6. The method of claim 1, further comprising:
before an original picture is sent out or screenshot is carried out on the original picture, current user information, equipment information and a timestamp are obtained, the obtained user information, equipment information and timestamp are transcoded into decimal numbers, a quick response code is generated according to the transcoded decimal numbers, and a dot matrix sequence consisting of binary codes is extracted from the generated quick response code and is used as the first dot matrix sequence;
converting the first dot array sequence into a dot array consisting of mark points as a first watermark dot array according to the mapping relation between the binary code and the mark points;
and superposing the first watermark dot matrix to the original picture to obtain the target picture, and sending out the target picture or carrying out screenshot on the target picture.
7. The method according to claim 6, wherein superimposing the first watermark lattice on the original picture to obtain the target picture comprises:
and identifying an area with the gray value change rate smaller than a preset threshold value from the original picture as a target area, and overlaying the first watermark lattice to the target area to obtain the target picture.
8. An image dark watermark processing device based on deep learning, characterized by comprising:
the coordinate determination module is used for acquiring a target picture to be subjected to watermark information extraction and the locating point coordinates of a first watermark dot matrix superposed in the target picture, and determining point location coordinates corresponding to each first point in the first watermark dot matrix in the target picture according to the locating point coordinates of the first watermark dot matrix;
the image identification module is used for acquiring a first point image which is intercepted in the target picture by taking a point coordinate corresponding to the first point as a center for each first point, taking the first point image as an input parameter of a classification model obtained through deep learning, and acquiring an output result of whether a mark point exists in the first point image output by the classification model;
and the information restoration module is used for obtaining the first watermark lattice according to the output result corresponding to each first dot, obtaining a first lattice sequence by the first watermark lattice, and restoring source information by the first lattice sequence, wherein the source information is watermark information superposed in the target picture and is output.
9. The apparatus of claim 8, further comprising a model training module for training the classification model;
the model training module is used for acquiring a plurality of pictures on which known second watermark dot matrixes are superimposed as sample pictures, and for each sample picture, determining point position coordinates corresponding to each second point position in the second watermark dot matrixes in the sample pictures according to the positioning point coordinates of the second watermark dot matrixes superimposed on the sample pictures; for each second point location, taking the point location coordinate corresponding to the second point location as a center to intercept a second point location image in the sample picture, moving the second point location within a preset range to obtain a new third point location, and taking the point location coordinate corresponding to the third point location as a center to intercept a third point location image in the target picture; and taking the second dot image or the third dot image as a training input image, taking a result of whether the marking points exist in the training input image determined by the second watermark dot matrix as expected output to obtain training samples, obtaining a set consisting of the training samples obtained from all sample images as a training sample set, and performing machine learning on all the training samples in the training sample set to obtain the classification model.
10. The apparatus according to claim 9, wherein the model training module is further configured to use the second dot image or the third dot image that is not used as a training sample as a verification input image, use a result of whether there is a mark point in the verification input image determined by the second watermark lattice as an expected output, obtain a verification sample, and obtain a set of several verification samples as a verification sample set; and calculating the accuracy of the output result of classifying each verification sample in the verification sample set by the classification model, and if the accuracy is less than the preset accuracy, continuing to train the classification model by using the newly generated training sample.
11. The apparatus of claim 9, wherein the model training module is further configured to compress a plurality of pictures superimposed with the known second watermark lattice, and use the compressed pictures as the sample pictures.
12. The apparatus according to claim 8, wherein the information restoration module is further configured to obtain the first watermark lattice according to an output result corresponding to each first point location, obtain a first lattice sequence from the first watermark lattice, restore a quick response code according to the first lattice sequence and the positioning information, version information and timing information of the complemented quick response code, identify a decimal number in the quick response code, decode the obtained decimal number to obtain source information of the target picture, where the source information is watermark information superimposed on the target picture, and output the source information;
the source information comprises user information, device information, and a timestamp of when the target picture is sent out.
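For illustration only: a sketch of the restoration path in claim 12, assuming the pyzbar library for quick-response-code reading and an assumed fixed-width packing of the decimal number into user, device, and timestamp fields.

```python
# Illustrative only: read the restored quick response code and unpack the
# decimal number into user, device, and timestamp fields (fixed widths assumed).
from PIL import Image
from pyzbar.pyzbar import decode

def restore_source_info(qr_image_path):
    results = decode(Image.open(qr_image_path))
    if not results:
        return None
    number = results[0].data.decode("ascii")
    return {                      # assumed packing: 8 + 8 + 10 decimal digits
        "user": number[:8],
        "device": number[8:16],
        "timestamp": number[16:26],
    }
```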
13. The apparatus of claim 8, further comprising a watermark superimposing module, wherein the watermark superimposing module is configured to obtain current user information, device information, and a timestamp before an original picture is sent out or a screenshot of the original picture is taken, transcode the obtained user information, device information, and timestamp into a decimal number, generate a quick response code according to the transcoded decimal number, and extract a dot matrix sequence consisting of binary codes from the generated quick response code as the first dot matrix sequence; convert the first dot matrix sequence into a dot matrix consisting of mark points as the first watermark dot matrix according to the mapping relation between the binary codes and the mark points; and superimpose the first watermark dot matrix on the original picture to obtain the target picture, and send out the target picture or take a screenshot of the target picture.
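For illustration only: a hedged sketch of the superposition path in claim 13, using the qrcode library to obtain a module matrix; the fixed-width decimal packing, the point spacing, and the way a mark point is rendered (a small decrease in gray value) are assumptions.

```python
# Illustrative only: encode source info as a decimal number, derive a dot
# matrix from a quick response code, and superimpose faint mark points.
import numpy as np
import qrcode
from PIL import Image

def build_watermark_matrix(user_id, device_id, timestamp):
    """Fixed-width decimal packing (assumed), then the QR module matrix."""
    number = f"{user_id:08d}{device_id:08d}{timestamp:010d}"
    qr = qrcode.QRCode(border=0, box_size=1)
    qr.add_data(number)
    qr.make(fit=True)
    return np.array(qr.get_matrix(), dtype=np.uint8)  # 1 = mark point

def superimpose(original, matrix, spacing=24, delta=2):
    """Darken one pixel per '1' module; spacing and delta are assumptions."""
    pixels = np.array(original.convert("L"), dtype=np.int16)
    for r in range(matrix.shape[0]):
        for c in range(matrix.shape[1]):
            if matrix[r, c]:
                y, x = r * spacing, c * spacing
                if y < pixels.shape[0] and x < pixels.shape[1]:
                    pixels[y, x] = max(pixels[y, x] - delta, 0)
    return Image.fromarray(pixels.astype(np.uint8))
```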
14. The apparatus according to claim 13, wherein the watermark superimposing module is further configured to identify, in the original picture, an area whose gray-scale value change rate is smaller than a preset threshold as a target area, and to superimpose the first watermark dot matrix on the target area to obtain the target picture.
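For illustration only: a sketch of claim 14's target-area selection, scoring candidate blocks by the mean gray-scale gradient magnitude and keeping blocks below a threshold; the block size and threshold are assumptions.

```python
# Illustrative only: pick low-texture blocks (small gray-scale change rate)
# as target areas for superimposing the watermark dot matrix.
import numpy as np

def find_target_areas(picture, block=64, threshold=8.0):  # both values assumed
    gray = np.array(picture.convert("L"), dtype=np.float32)
    gy, gx = np.gradient(gray)               # per-pixel gray-scale change rates
    rate = np.sqrt(gx ** 2 + gy ** 2)
    areas = []
    for y in range(0, gray.shape[0] - block + 1, block):
        for x in range(0, gray.shape[1] - block + 1, block):
            if rate[y:y + block, x:x + block].mean() < threshold:
                areas.append((x, y, block, block))
    return areas
```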
15. An electronic device, comprising:
at least one processor, at least one memory, a communication interface, and a bus; wherein,
the processor, the memory, and the communication interface communicate with one another through the bus;
the communication interface is used for information transmission between the electronic device and communication devices of other electronic devices;
the memory stores program instructions executable by the processor, the processor invoking the program instructions to perform the method of any of claims 1-7.
16. A non-transitory computer-readable storage medium storing computer instructions that cause a computer to perform the method of any one of claims 1 to 7.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201811610063.3A CN109829475B (en) | 2018-12-27 | 2018-12-27 | Image dark watermark processing method and device based on deep learning |
Publications (2)
Publication Number | Publication Date |
---|---|
CN109829475A (en) | 2019-05-31 |
CN109829475B (en) | 2020-10-30 |
Family
ID=66861284
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201811610063.3A Active CN109829475B (en) | Image dark watermark processing method and device based on deep learning | 2018-12-27 | 2018-12-27 |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN109829475B (en) |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN112634120A (en) * | 2020-12-30 | 2021-04-09 | 暨南大学 | Image reversible watermarking method based on CNN prediction |
CN113392669A (en) * | 2021-05-31 | 2021-09-14 | 苏州中科华影健康科技有限公司 | Image information detection method, detection device and storage medium |
CN115660933A (en) * | 2022-11-02 | 2023-01-31 | 北京奕之宣科技有限公司 | Method, device and equipment for identifying watermark information |
WO2024255505A1 (en) * | 2023-06-16 | 2024-12-19 | 京东方科技集团股份有限公司 | Watermark extraction method, model training method, watermark addition method, and electronic apparatus |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020080992A1 (en) * | 2000-12-21 | 2002-06-27 | Decker Stephen K. | Watermarking holograms |
CN103150700A (en) * | 2013-03-25 | 2013-06-12 | 西南科技大学 | Method for embedding and extracting digital watermarks into/from image |
CN104810022A (en) * | 2015-05-11 | 2015-07-29 | 东北师范大学 | Time-domain digital audio watermarking method based on audio breakpoint |
CN108810619A (en) * | 2018-06-29 | 2018-11-13 | 北京奇虎科技有限公司 | Identify the method, apparatus and electronic equipment of watermark in video |
Also Published As
Publication number | Publication date |
---|---|
CN109829475B (en) | 2020-10-30 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
Zhang et al. | SteganoGAN: High capacity image steganography with GANs | |
CN109829475B (en) | Image dark watermark processing method and device based on deep learning | |
CN110853033B (en) | Video detection method and device based on inter-frame similarity | |
US20180204562A1 (en) | Method and device for image recognition | |
US20220067888A1 (en) | Image processing method and apparatus, storage medium, and electronic device | |
CN112070649B (en) | Method and system for removing specific character string watermark | |
CN111741329B (en) | Video processing method, device, equipment and storage medium | |
CN114222181B (en) | Image processing method, device, equipment and medium | |
CN112511818A (en) | Video playing quality detection method and device | |
CN115810215A (en) | Face image generation method, device, equipment and storage medium | |
CN110427998A (en) | Model training, object detection method and device, electronic equipment, storage medium | |
CN110610131B (en) | Face movement unit detection method and device, electronic equipment and storage medium | |
CN114005019A (en) | Method for identifying copied image and related equipment thereof | |
CN114240770A (en) | Image processing method, device, server and storage medium | |
CN110298229B (en) | Video image processing method and device | |
CN112966230A (en) | Information steganography and extraction method, device and equipment | |
CN111539435A (en) | Semantic segmentation model construction method, image segmentation equipment and storage medium | |
CN117350910A (en) | Image watermark protection method based on diffusion image editing model | |
CN113610065B (en) | Handwriting recognition method and device | |
JP7539998B2 (en) | Zoom Agnostic Watermark Extraction | |
CN116091862A (en) | Picture quality identification method, device, equipment, storage medium and product | |
CN115311680A (en) | Human image quality detection method, device, electronic device and storage medium | |
CN114782969A (en) | Image table data extraction method and device based on generation countermeasure network | |
CN117597702A (en) | Scaling-independent watermark extraction | |
CN113744158A (en) | Image generation method and device, electronic equipment and storage medium |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | | |
SE01 | Entry into force of request for substantive examination | | |
CB02 | Change of applicant information |
Address after: 100088 Building 3 332, 102, 28 Xinjiekouwai Street, Xicheng District, Beijing
Applicant after: QAX Technology Group Inc.
Address before: 100015 No. 10 Jiuxianqiao Road, Chaoyang District, Beijing, building 15, floor 17, layer 1701-26, 3
Applicant before: BEIJING QIANXIN TECHNOLOGY Co.,Ltd.
GR01 | Patent grant | | |