This project demonstrates using the Tika-Python package (a Python port of Apache Tika) to compute file similarity based on metadata features.
The script can iterate over all files in the current directory, or over files given on the command line, and derive their metadata features, then compute the union of all features. The union of all features becomes the "golden feature set" against which each document's features are compared via intersection. Each file's similarity score is the size of that intersection divided by the size of the unioned set.
Scores are sorted in descending order and can be shown in three different Data-Driven Documents (D3) visualizations. A companion project, Auto Extractor, uses Apache Spark and Apache Nutch to take web crawl data and produce D3 visualizations and clusters of similar pages.
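The scoring above can be sketched in a few lines of plain Python. The file names and metadata dicts below are illustrative only, standing in for what tika-python's parser would return:

```python
# Sketch of the golden-feature-set scoring described above.
# These metadata feature sets are illustrative, not real Tika output.
metadata = {
    "a.pdf": {"Content-Type", "Author", "Page-Count"},
    "b.pdf": {"Content-Type", "Author"},
    "c.jpg": {"Content-Type", "Image-Width"},
}

# Union of every file's feature names -> the "golden feature set".
golden = set().union(*metadata.values())

# Per-file score: |features ∩ golden| / |golden|.
scores = {name: len(feats & golden) / float(len(golden))
          for name, feats in metadata.items()}

# Sort descending, as the script does before visualization.
ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
print(ranked)
```

Here `a.pdf` scores 3/4 = 0.75 because it carries three of the four feature names in the union.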
pip install editdistance
git clone https://github.com/chrismattmann/tika-img-similarity
You can also check out ETLlib.
Optional: Compute similarity only on specific IANA MIME Type(s) inside a directory using --accept
This compares metadata feature names as a golden feature set
#!/usr/bin/env python2.7
python similarity.py -f [directory of files] [--accept [jpeg pdf etc...]]
or
python similarity.py -c [file1 file2 file3 ...]
This compares metadata feature names together with its value as a golden feature set
#!/usr/bin/env python2.7
python value-similarity.py -f [directory of files] [--accept [jpeg pdf etc...]]
or
python value-similarity.py -c [file1 file2 file3 ...]
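The name+value variant can be sketched the same way: a feature is a (key, value) pair, so two files only match on a metadata field when its value is identical. The metadata dicts below are illustrative:

```python
# Illustrative metadata; real values come from tika-python's parser.
meta_a = {"Content-Type": "application/pdf", "Author": "Jane"}
meta_b = {"Content-Type": "application/pdf", "Author": "John"}

# Features are (name, value) pairs, not just names.
features_a = set(meta_a.items())
features_b = set(meta_b.items())

golden = features_a | features_b
score_a = len(features_a & golden) / float(len(golden))
print(score_a)
```

The golden set holds three pairs (the shared `Content-Type` plus each distinct `Author`), so each file scores 2/3; under name-only similarity both files would score 1.0.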
#!/usr/bin/env python2.7
python edit-value-similarity.py [-h] --inputDir INPUTDIR --outCSV OUTCSV [--accept [png pdf etc...]] [--allKeys]
--inputDir INPUTDIR path to directory containing files
--outCSV OUTCSV path to the output CSV file containing pair-wise similarity scores based on edit distance
--accept [ACCEPT] Optional: compute similarity only on specified IANA MIME Type(s)
--allKeys Optional: compute edit distance across all metadata keys of the two documents; by default, only the intersection of their metadata keys is used
E.g.: python edit-value-similarity.py --inputDir /path/to/files --outCSV /path/to/output.csv --accept png pdf gif
#!/usr/bin/env python2.7
python cosine_similarity.py [-h] --inputDir INPUTDIR --outCSV OUTCSV [--accept [png pdf etc...]]
--inputDir INPUTDIR path to directory containing files
--outCSV OUTCSV path to the output CSV file containing pair-wise similarity scores based on cosine distance
--accept [ACCEPT] Optional: compute similarity only on specified IANA MIME Type(s)
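Cosine similarity treats each document's metadata as a term-frequency vector. A minimal sketch, assuming simple whitespace tokenization (the script's actual tokenization may differ):

```python
import math
from collections import Counter

def cosine_similarity(text_a, text_b):
    """Cosine of the angle between term-frequency vectors of two strings."""
    va, vb = Counter(text_a.split()), Counter(text_b.split())
    dot = sum(va[t] * vb[t] for t in va)
    norm = math.sqrt(sum(c * c for c in va.values())) \
         * math.sqrt(sum(c * c for c in vb.values()))
    return dot / norm if norm else 0.0

print(cosine_similarity("application pdf jane", "application pdf john"))
```

Two of the three terms match here, so the score is 2/3; identical texts score 1.0 and disjoint texts 0.0.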
#!/usr/bin/env python2.7
python psykey.py --inputDir INPUTDIR --outCSV OUTCSV --wordlists WORDLIST_FOLDER
--inputDir INPUTDIR path to directory containing files
--outCSV OUTCSV path to the output CSV file containing pair-wise similarity scores based on cosine distance of stylistic and authorship features
--wordlists WORDLIST_FOLDER path to the folder containing word-list files, one per class, e.g. the wordlist folder provided with the tika-similarity library. If adding your own, each file must be a .txt with one word per line; the file name is used as the class name.
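One way such word-list classes can feed a feature vector is to score each document by the fraction of its words that fall into each class. The `class_features` helper and the class names below are hypothetical, a sketch of the idea rather than psykey.py's actual implementation:

```python
def class_features(text, classes):
    """Fraction of the document's words that appear in each word-list class."""
    words = text.lower().split()
    total = float(len(words)) or 1.0   # avoid division by zero on empty docs
    return {cls: sum(w in vocab for w in words) / total
            for cls, vocab in classes.items()}

# Illustrative classes; in practice each class is loaded from one .txt file
# (one word per line) in the wordlist folder, named after the class.
classes = {"positive": {"good", "great"}, "negative": {"bad", "awful"}}
print(class_features("a good great day", classes))
```

The resulting per-class fractions form the stylistic feature vector that the pair-wise cosine comparison then operates on.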
#!/usr/bin/env python2.7
Usage:
import metalevenshtein as metalev
print metalev.meta_levenshtein('abacus1cat','cat1cus')
To use all the argument options in this function:
def meta_levenshtein(string1,string2,Sim='levenshtein',theta=0.5,strict=-1,idf=dict()):
Implements ideas from the paper "Robust Similarity Measures for Named Entities Matching" by Erwan Moreau et al.
Sim = jaro_winkler or levenshtein: the secondary matching function.
theta is the secondary similarity threshold: the higher it is set, the harder it is for strings to match.
strict=-1 for doing all permutations of the substrings
strict=1 for no permutations
idf: a dictionary mapping string (word) to float (idf of the word). Most useful when matching multi-word entities where word importance matters,
e.g. 'harry potter' vs. 'the wizard harry potter'.
#!/usr/bin/env python2.7
import features as feat
data1=[1,2,3,3,2,1]
data2=[4,5,6,6,5,4]
area,error=feat.gaussian_overlap(data1,data2)
print area
Jaccard Similarity
python cluster-scores.py [-t threshold_value] (for generating cluster viz)
open cluster-d3.html (or dynamic-cluster.html for interactive viz) in your browser
Edit Distance & Cosine Similarity
python edit-cosine-cluster.py --inputCSV <path to CSV file>
open cluster-d3.html (or dynamic-cluster.html for interactive viz) in your browser
Default threshold value is 0.01.
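One plausible reading of the threshold, sketched below as an assumption rather than a description of cluster-scores.py's internals: sort files by score and start a new cluster whenever consecutive scores differ by at least the threshold (0.01 by default, matching the -t flag):

```python
def cluster_by_threshold(scores, threshold=0.01):
    """Group files whose consecutive sorted scores differ by < threshold."""
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    clusters, current = [], [ranked[0]]
    for prev, item in zip(ranked, ranked[1:]):
        if prev[1] - item[1] < threshold:
            current.append(item)        # close enough: same cluster
        else:
            clusters.append(current)    # gap too wide: start a new cluster
            current = [item]
    clusters.append(current)
    return clusters

scores = {"a.pdf": 0.95, "b.pdf": 0.949, "c.jpg": 0.5}
print(cluster_by_threshold(scores))
```

With these illustrative scores, `a.pdf` and `b.pdf` land in one cluster and `c.jpg` in another; the resulting groups are what the D3 pages visualize.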
Jaccard Similarity
python circle-packing.py (for generating circlepacking viz)
open circlepacking.html (or dynamic-circlepacking.html for interactive viz) in your browser
Edit Distance & Cosine Similarity
python edit-cosine-circle-packing.py --inputCSV <path to CSV file>
open circlepacking.html (or dynamic-circlepacking.html for interactive viz) in your browser
<img src="https://github.com/dongnizh/tika-img-similarity/blob/refactor/snapshots/circlepacking.png" width = "200px" height = "200px" style = "float:left">
<img src="https://github.com/dongnizh/tika-img-similarity/blob/refactor/snapshots/interactive-circlepacking.png" width = "200px" height = "200px" style = "float:right">
This is a combination of the cluster viz and the circle-packing viz. The deeper the color, the more attributes the items in a cluster share.
* open compositeViz.html in your browser
Visualization of clustering from Jaccard Similarity result
* python sunburst.py (for generating sunburst viz)
* open sunburst.html
If you are dealing with big data, you can use it this way:
* python generateLevelCluster.py (for generating level cluster viz)
* open levelCluster-d3.html in your browser
You can set the maximum number of children per node via _maxNumNode (default: 10) in generateLevelCluster.py.
* python tree_map.py (for generating treemap viz)
* open tree_map.html in your browser
Send them to Chris A. Mattmann.
This project is licensed under the Apache License, version 2.0.