All implemented functions and operators should behave just as they would in MongoDB, even raising errors for the same causes.
pip install montydb

Optional dependencies:
- lmdb (for the LMDB "lightning" storage)
- pymongo (for bson)

bson is not used by default even when it is installed; set the env var MONTY_ENABLE_BSON=1 to enable it.
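For example, enabling bson support could look like this (app.py is a placeholder for your own entry point, not part of montydb):

```shell
# opt in to bson before launching the application;
# the variable must be set before montydb is imported
export MONTY_ENABLE_BSON=1
python app.py
```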
>>> from montydb import MontyClient
>>> col = MontyClient(":memory:").db.test
>>> col.insert_many([{"stock": "A", "qty": 6}, {"stock": "A", "qty": 2}])
>>> cur = col.find({"stock": "A", "qty": {"$gt": 4}})
>>> next(cur)
{'_id': ObjectId('5ad34e537e8dd45d9c61a456'), 'stock': 'A', 'qty': 6}
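Conceptually, the `$gt` filter above selects documents the way this plain-Python sketch does. This is only an illustration; montydb's real matcher also handles type ordering, arrays, dotted field paths, and many more operators.

```python
# Toy illustration of a {"field": {"$gt": value}} filter.
def matches_gt(doc, field, value):
    # a document matches only if the field exists and compares greater
    return field in doc and doc[field] > value

docs = [{"stock": "A", "qty": 6}, {"stock": "A", "qty": 2}]
hits = [d for d in docs if d["stock"] == "A" and matches_gt(d, "qty", 4)]
print(hits)  # [{'stock': 'A', 'qty': 6}]
```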
Configuration is only required on repository creation or modification.
Currently, each repository can be assigned only one storage engine.
Memory
Memory storage needs no configuration; nothing is saved to disk.
>>> from montydb import MontyClient
>>> client = MontyClient(":memory:")
FlatFile
FlatFile is the default on-disk storage engine.
>>> from montydb import MontyClient
>>> client = MontyClient("/db/repo")
FlatFile config:
[flatfile]
cache_modified: 0  # how many document CRUD operations to cache before flushing to disk
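The `cache_modified` idea can be sketched as a simple write-behind buffer: writes accumulate in memory and hit the disk only after the threshold is exceeded. This is a toy illustration, not montydb's actual implementation.

```python
import json
import os
import tempfile

class WriteBehindStore:
    """Toy store that flushes to disk after `cache_modified` buffered writes."""

    def __init__(self, path, cache_modified=0):
        self.path = path
        self.cache_modified = cache_modified
        self.docs = []
        self.pending = 0  # modifications since the last flush

    def insert(self, doc):
        self.docs.append(doc)
        self.pending += 1
        if self.pending > self.cache_modified:
            self.flush()

    def flush(self):
        with open(self.path, "w") as f:
            json.dump(self.docs, f)
        self.pending = 0

path = os.path.join(tempfile.mkdtemp(), "col.json")
store = WriteBehindStore(path, cache_modified=2)
store.insert({"a": 1})  # buffered, not yet on disk
store.insert({"a": 2})  # still buffered
store.insert({"a": 3})  # threshold exceeded -> flushed to disk
```

With `cache_modified: 0` (the default above), every write triggers a flush.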
LMDB (Lightning Memory-Mapped Database)
LMDB is NOT the default on-disk storage; it must be configured before getting a client.
Newly implemented.
>>> from montydb import set_storage, MontyClient
>>> set_storage("/db/repo", storage="lightning")
>>> client = MontyClient("/db/repo")
LMDB config:
[lightning]
map_size: 10485760  # maximum size the database may grow to
SQLite
SQLite is NOT the default on-disk storage; it must be configured before getting a client.
Pre-existing SQLite storage files saved by montydb<=1.3.0 are not readable/writable after montydb==2.0.0.
>>> from montydb import set_storage, MontyClient
>>> set_storage("/db/repo", storage="sqlite")
>>> client = MontyClient("/db/repo")
SQLite config:
[sqlite]
journal_mode: WAL
SQLite write concern:
>>> client = MontyClient("/db/repo",
...                      synchronous=1,
...                      automatic_index=False,
...                      busy_timeout=5000)
You can prefix the repository path with the montydb URI scheme.
>>> client = MontyClient("montydb:///db/repo")
pymongo's bson may be required for the following utilities.
montyimport
Imports content from an Extended JSON file into a MontyCollection instance.
The JSON file can be generated by montyexport or mongoexport.
>>> from montydb import open_repo, utils
>>> with open_repo("foo/bar"):
...     utils.montyimport("db", "col", "/path/dump.json")
...
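Files produced by mongoexport are newline-delimited Extended JSON: one document per line, with special types such as ObjectId encoded as `{"$oid": ...}`. A minimal sketch of what such a dump looks like (the field names and values here are made up for illustration):

```python
import json

# two documents in mongoexport-style Extended JSON, one per line
docs = [
    {"_id": {"$oid": "5ad34e537e8dd45d9c61a456"}, "stock": "A", "qty": 6},
    {"_id": {"$oid": "5ad34e537e8dd45d9c61a457"}, "stock": "A", "qty": 2},
]
with open("dump.json", "w") as f:
    for doc in docs:
        f.write(json.dumps(doc) + "\n")
```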
montyexport
Produces a JSON export of data stored in a MontyCollection instance.
The JSON file can be loaded by montyimport or mongoimport.
>>> from montydb import open_repo, utils
>>> with open_repo("foo/bar"):
...     utils.montyexport("db", "col", "/data/dump.json")
...
montyrestore
Loads a binary database dump into a MontyCollection instance.
The BSON file can be generated by montydump or mongodump.
>>> from montydb import open_repo, utils
>>> with open_repo("foo/bar"):
...     utils.montyrestore("db", "col", "/path/dump.bson")
...
montydump
Creates a binary export from a MontyCollection instance.
The BSON file can be loaded by montyrestore or mongorestore.
>>> from montydb import open_repo, utils
>>> with open_repo("foo/bar"):
...     utils.montydump("db", "col", "/data/dump.bson")
...
MongoQueryRecorder
Records MongoDB query results over a period of time. Requires access to the database profiler.
This works by filtering the database profile data and reproducing the queries of the find
and distinct commands.
>>> from pymongo import MongoClient
>>> from montydb.utils import MongoQueryRecorder
>>> client = MongoClient()
>>> recorder = MongoQueryRecorder(client["mydb"])
>>> recorder.start()
>>> # Make some queries or run the App...
>>> recorder.stop()
>>> recorder.extract()
{<collection_1>: [<doc_1>, <doc_2>, ...], ...}
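The `{collection: [doc, ...]}` shape that `extract()` returns can be sketched as a grouping pass over profiler-like records. The entries and `ns` field below are made-up stand-ins for real profiler data; montydb's actual recorder replays the profiled find/distinct commands against the server.

```python
from collections import defaultdict

# made-up profiler-like records: "ns" is the "database.collection" namespace
profile_entries = [
    {"ns": "mydb.users", "doc": {"name": "amy"}},
    {"ns": "mydb.users", "doc": {"name": "bob"}},
    {"ns": "mydb.items", "doc": {"sku": 1}},
]

grouped = defaultdict(list)
for entry in profile_entries:
    collection = entry["ns"].split(".", 1)[1]  # strip the database name
    grouped[collection].append(entry["doc"])

print(dict(grouped))
# {'users': [{'name': 'amy'}, {'name': 'bob'}], 'items': [{'sku': 1}]}
```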
MontyList
Experimental. A subclass of list that combines the common CRUD methods from Mongo's Collection and Cursor.
>>> from montydb.utils import MontyList
>>> mtl = MontyList([1, 2, {"a": 1}, {"a": 5}, {"a": 8}])
>>> mtl.find({"a": {"$gt": 3}})
MontyList([{'a': 5}, {'a': 8}])
Mainly for personal skill practice and fun. I work in the VFX industry, and some of my production needs (mostly edge cases) require running in limited environments (e.g. outsourced render farms) that may have trouble running or connecting to a MongoDB instance. This project really helps in those cases.