mirror of
https://github.com/mgerb/mywebsite
synced 2026-01-12 18:52:50 +00:00
updated a bunch of file paths and changed the way posts are loaded
1301  node_modules/mongodb/HISTORY.md  (generated, vendored, new file)
File diff suppressed because it is too large
201  node_modules/mongodb/LICENSE  (generated, vendored, new file)
@@ -0,0 +1,201 @@
                                 Apache License
                           Version 2.0, January 2004
                        http://www.apache.org/licenses/

   TERMS AND CONDITIONS FOR USE, REPRODUCTION, AND DISTRIBUTION

   1. Definitions.

      "License" shall mean the terms and conditions for use, reproduction,
      and distribution as defined by Sections 1 through 9 of this document.

      "Licensor" shall mean the copyright owner or entity authorized by
      the copyright owner that is granting the License.

      "Legal Entity" shall mean the union of the acting entity and all
      other entities that control, are controlled by, or are under common
      control with that entity. For the purposes of this definition,
      "control" means (i) the power, direct or indirect, to cause the
      direction or management of such entity, whether by contract or
      otherwise, or (ii) ownership of fifty percent (50%) or more of the
      outstanding shares, or (iii) beneficial ownership of such entity.

      "You" (or "Your") shall mean an individual or Legal Entity
      exercising permissions granted by this License.

      "Source" form shall mean the preferred form for making modifications,
      including but not limited to software source code, documentation
      source, and configuration files.

      "Object" form shall mean any form resulting from mechanical
      transformation or translation of a Source form, including but
      not limited to compiled object code, generated documentation,
      and conversions to other media types.

      "Work" shall mean the work of authorship, whether in Source or
      Object form, made available under the License, as indicated by a
      copyright notice that is included in or attached to the work
      (an example is provided in the Appendix below).

      "Derivative Works" shall mean any work, whether in Source or Object
      form, that is based on (or derived from) the Work and for which the
      editorial revisions, annotations, elaborations, or other modifications
      represent, as a whole, an original work of authorship. For the purposes
      of this License, Derivative Works shall not include works that remain
      separable from, or merely link (or bind by name) to the interfaces of,
      the Work and Derivative Works thereof.

      "Contribution" shall mean any work of authorship, including
      the original version of the Work and any modifications or additions
      to that Work or Derivative Works thereof, that is intentionally
      submitted to Licensor for inclusion in the Work by the copyright owner
      or by an individual or Legal Entity authorized to submit on behalf of
      the copyright owner. For the purposes of this definition, "submitted"
      means any form of electronic, verbal, or written communication sent
      to the Licensor or its representatives, including but not limited to
      communication on electronic mailing lists, source code control systems,
      and issue tracking systems that are managed by, or on behalf of, the
      Licensor for the purpose of discussing and improving the Work, but
      excluding communication that is conspicuously marked or otherwise
      designated in writing by the copyright owner as "Not a Contribution."

      "Contributor" shall mean Licensor and any individual or Legal Entity
      on behalf of whom a Contribution has been received by Licensor and
      subsequently incorporated within the Work.

   2. Grant of Copyright License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      copyright license to reproduce, prepare Derivative Works of,
      publicly display, publicly perform, sublicense, and distribute the
      Work and such Derivative Works in Source or Object form.

   3. Grant of Patent License. Subject to the terms and conditions of
      this License, each Contributor hereby grants to You a perpetual,
      worldwide, non-exclusive, no-charge, royalty-free, irrevocable
      (except as stated in this section) patent license to make, have made,
      use, offer to sell, sell, import, and otherwise transfer the Work,
      where such license applies only to those patent claims licensable
      by such Contributor that are necessarily infringed by their
      Contribution(s) alone or by combination of their Contribution(s)
      with the Work to which such Contribution(s) was submitted. If You
      institute patent litigation against any entity (including a
      cross-claim or counterclaim in a lawsuit) alleging that the Work
      or a Contribution incorporated within the Work constitutes direct
      or contributory patent infringement, then any patent licenses
      granted to You under this License for that Work shall terminate
      as of the date such litigation is filed.

   4. Redistribution. You may reproduce and distribute copies of the
      Work or Derivative Works thereof in any medium, with or without
      modifications, and in Source or Object form, provided that You
      meet the following conditions:

      (a) You must give any other recipients of the Work or
          Derivative Works a copy of this License; and

      (b) You must cause any modified files to carry prominent notices
          stating that You changed the files; and

      (c) You must retain, in the Source form of any Derivative Works
          that You distribute, all copyright, patent, trademark, and
          attribution notices from the Source form of the Work,
          excluding those notices that do not pertain to any part of
          the Derivative Works; and

      (d) If the Work includes a "NOTICE" text file as part of its
          distribution, then any Derivative Works that You distribute must
          include a readable copy of the attribution notices contained
          within such NOTICE file, excluding those notices that do not
          pertain to any part of the Derivative Works, in at least one
          of the following places: within a NOTICE text file distributed
          as part of the Derivative Works; within the Source form or
          documentation, if provided along with the Derivative Works; or,
          within a display generated by the Derivative Works, if and
          wherever such third-party notices normally appear. The contents
          of the NOTICE file are for informational purposes only and
          do not modify the License. You may add Your own attribution
          notices within Derivative Works that You distribute, alongside
          or as an addendum to the NOTICE text from the Work, provided
          that such additional attribution notices cannot be construed
          as modifying the License.

      You may add Your own copyright statement to Your modifications and
      may provide additional or different license terms and conditions
      for use, reproduction, or distribution of Your modifications, or
      for any such Derivative Works as a whole, provided Your use,
      reproduction, and distribution of the Work otherwise complies with
      the conditions stated in this License.

   5. Submission of Contributions. Unless You explicitly state otherwise,
      any Contribution intentionally submitted for inclusion in the Work
      by You to the Licensor shall be under the terms and conditions of
      this License, without any additional terms or conditions.
      Notwithstanding the above, nothing herein shall supersede or modify
      the terms of any separate license agreement you may have executed
      with Licensor regarding such Contributions.

   6. Trademarks. This License does not grant permission to use the trade
      names, trademarks, service marks, or product names of the Licensor,
      except as required for reasonable and customary use in describing the
      origin of the Work and reproducing the content of the NOTICE file.

   7. Disclaimer of Warranty. Unless required by applicable law or
      agreed to in writing, Licensor provides the Work (and each
      Contributor provides its Contributions) on an "AS IS" BASIS,
      WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or
      implied, including, without limitation, any warranties or conditions
      of TITLE, NON-INFRINGEMENT, MERCHANTABILITY, or FITNESS FOR A
      PARTICULAR PURPOSE. You are solely responsible for determining the
      appropriateness of using or redistributing the Work and assume any
      risks associated with Your exercise of permissions under this License.

   8. Limitation of Liability. In no event and under no legal theory,
      whether in tort (including negligence), contract, or otherwise,
      unless required by applicable law (such as deliberate and grossly
      negligent acts) or agreed to in writing, shall any Contributor be
      liable to You for damages, including any direct, indirect, special,
      incidental, or consequential damages of any character arising as a
      result of this License or out of the use or inability to use the
      Work (including but not limited to damages for loss of goodwill,
      work stoppage, computer failure or malfunction, or any and all
      other commercial damages or losses), even if such Contributor
      has been advised of the possibility of such damages.

   9. Accepting Warranty or Additional Liability. While redistributing
      the Work or Derivative Works thereof, You may choose to offer,
      and charge a fee for, acceptance of support, warranty, indemnity,
      or other liability obligations and/or rights consistent with this
      License. However, in accepting such obligations, You may act only
      on Your own behalf and on Your sole responsibility, not on behalf
      of any other Contributor, and only if You agree to indemnify,
      defend, and hold each Contributor harmless for any liability
      incurred by, or claims asserted against, such Contributor by reason
      of your accepting any such warranty or additional liability.

   END OF TERMS AND CONDITIONS

   APPENDIX: How to apply the Apache License to your work.

      To apply the Apache License to your work, attach the following
      boilerplate notice, with the fields enclosed by brackets "{}"
      replaced with your own identifying information. (Don't include
      the brackets!) The text should be enclosed in the appropriate
      comment syntax for the file format. We also recommend that a
      file or class name and description of purpose be included on the
      same "printed page" as the copyright notice for easier
      identification within third-party archives.

   Copyright {yyyy} {name of copyright owner}

   Licensed under the Apache License, Version 2.0 (the "License");
   you may not use this file except in compliance with the License.
   You may obtain a copy of the License at

       http://www.apache.org/licenses/LICENSE-2.0

   Unless required by applicable law or agreed to in writing, software
   distributed under the License is distributed on an "AS IS" BASIS,
   WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
   See the License for the specific language governing permissions and
   limitations under the License.
11  node_modules/mongodb/Makefile  (generated, vendored, new file)
@@ -0,0 +1,11 @@
NODE = node
NPM = npm
JSDOC = jsdoc
name = all

generate_docs:
	# cp -R ./HISTORY.md ./docs/content/meta/release-notes.md
	hugo -s docs/reference -d ../../public
	$(JSDOC) -c conf.json -t docs/jsdoc-template/ -d ./public/api
	cp -R ./public/api/scripts ./public/.
	cp -R ./public/api/styles ./public/.
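# Usage sketch: assuming hugo and jsdoc are installed and on the PATH
# (the Makefile does not install them), the docs are generated with:
#
#   make generate_docs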
415  node_modules/mongodb/README.md  (generated, vendored, new file)
@@ -0,0 +1,415 @@
# Description

The MongoDB driver is the high-level part of the 2.1 (or higher) MongoDB driver stack and is meant for end users.

## MongoDB Node.JS Driver

| what          | where                                                  |
|---------------|--------------------------------------------------------|
| documentation | http://mongodb.github.io/node-mongodb-native/          |
| api-doc       | http://mongodb.github.io/node-mongodb-native/2.1/api/  |
| source        | https://github.com/mongodb/node-mongodb-native         |
| mongodb       | http://www.mongodb.org/                                |

### Blogs of Engineers involved in the driver

- Christian Kvalheim [@christkv](https://twitter.com/christkv) <http://christiankvalheim.com>

### Bugs / Feature Requests

Think you've found a bug? Want to see a new feature in node-mongodb-native? Please open a
case in our issue management tool, JIRA:

- Create an account and log in at <https://jira.mongodb.org>.
- Navigate to the NODE project at <https://jira.mongodb.org/browse/NODE>.
- Click **Create Issue**. Please provide as much information as possible about the issue and how to reproduce it.

Bug reports in JIRA for all driver projects (i.e. NODE, PYTHON, CSHARP, JAVA) and the
Core Server (i.e. SERVER) project are **public**.

### Questions and Bug Reports

* mailing list: https://groups.google.com/forum/#!forum/node-mongodb-native
* jira: http://jira.mongodb.org/

### Change Log

http://jira.mongodb.org/browse/NODE

# Installation

The recommended way to get started with the Node.js 2.0 driver is to use `NPM` (the Node Package Manager) to install the dependency in your project.

## MongoDB Driver

Given that you have created your own project using `npm init`, install the mongodb driver and its dependencies by executing the following `NPM` command.

```
npm install mongodb --save
```

This will download the MongoDB driver and add a dependency entry to your `package.json` file.

## Troubleshooting

The MongoDB driver depends on several other packages. These are:

* mongodb-core
* bson
* kerberos
* node-gyp

The `kerberos` package is a C++ extension that requires a build environment to be installed on your system. You must be able to build Node.js itself in order to compile and install the `kerberos` module. Furthermore, the `kerberos` module requires the MIT Kerberos package to compile correctly on UNIX operating systems. Consult your UNIX operating system's package manager for the libraries to install.

{{% note class="important" %}}
Windows already contains the SSPI API used for Kerberos authentication. However, you will need to install a full compiler toolchain using Visual Studio C++ to correctly install the kerberos extension.
{{% /note %}}

### Diagnosing on UNIX

If you don't have the build essentials, the module won't build. On Linux you will need gcc, g++, Node.js with all the headers, and Python. The easiest way to figure out what's missing is to try to build the kerberos project directly, using the following steps.

```
git clone https://github.com/christkv/kerberos.git
cd kerberos
npm install
```

If all the steps complete, you have the right toolchain installed. If you get an error that node-gyp was not found, install it globally:

```
npm install -g node-gyp
```

If it compiles correctly and runs the tests, you are golden. We can now try to install the mongodb driver:

```
cd yourproject
npm install mongodb --save
```

If it still fails, the next step is to examine the npm log. Rerun the command, this time in verbose mode.

```
npm --loglevel verbose install mongodb
```

This will print out all the steps npm performs while trying to install the module.

### Diagnosing on Windows

A compiler toolchain known to work for compiling `kerberos` on Windows is the following.

* Visual Studio C++ 2010 (do not use higher versions)
* Windows 7 64bit SDK
* Python 2.7 or higher

Open the Visual Studio command prompt. Ensure node.exe is in your path and install node-gyp.

```
npm install -g node-gyp
```

Next you will have to build the project manually to test it. Use whatever git tool you prefer to grab the repo.

```
git clone https://github.com/christkv/kerberos.git
cd kerberos
npm install
node-gyp rebuild
```

This should rebuild the driver successfully if you have everything set up correctly.

### Other possible issues

Your Python installation might be broken, making gyp fail. We always recommend testing your deployment environment first by trying to build Node.js itself on the server in question, as this should unearth any issues with broken packages (and there are a lot of broken packages out there).

Another thing to check is that your user has write permission to wherever the node modules are being installed.

QuickStart
==========
The quick start guide will show you how to set up a simple application using Node.js and MongoDB. Its scope is limited to setting up the driver and performing simple CRUD operations. For more in-depth coverage, we encourage reading the tutorials.

Create the package.json file
----------------------------
Let's create a directory where our application will live. In our case we will put this under our projects directory.

```
mkdir myproject
cd myproject
```

Enter the following command and answer the questions to create the initial structure for your new project.

```
npm init
```

Next we need to edit the generated package.json file to add the dependency for the MongoDB driver. The package.json file below is just an example, and yours will look different depending on how you answered the questions after entering `npm init`.

```
{
  "name": "myproject",
  "version": "1.0.0",
  "description": "My first project",
  "main": "index.js",
  "repository": {
    "type": "git",
    "url": "git://github.com/christkv/myfirstproject.git"
  },
  "dependencies": {
    "mongodb": "~2.0"
  },
  "author": "Christian Kvalheim",
  "license": "Apache 2.0",
  "bugs": {
    "url": "https://github.com/christkv/myfirstproject/issues"
  },
  "homepage": "https://github.com/christkv/myfirstproject"
}
```

Save the file, return to the shell or command prompt, and use **NPM** to install all the dependencies.

```
npm install
```

You should see **NPM** download a lot of files. Once it's done you'll find all the downloaded packages under the **node_modules** directory.

Booting up a MongoDB Server
---------------------------
Let's boot up a MongoDB server instance. Download the right MongoDB version from [MongoDB](http://www.mongodb.org), open a new shell or command line, and ensure the **mongod** command is in the shell or command line path. Now let's create a database directory (in our case under **/data**).

```
mongod --dbpath=/data --port 27017
```

You should see the **mongod** process start up and print some status information.

Connecting to MongoDB
---------------------
Let's create a new **app.js** file that we will use to show the basic CRUD operations using the MongoDB driver.

First let's add code to connect to the server and the database **myproject**.

```js
var MongoClient = require('mongodb').MongoClient
  , assert = require('assert');

// Connection URL
var url = 'mongodb://localhost:27017/myproject';
// Use connect method to connect to the Server
MongoClient.connect(url, function(err, db) {
  assert.equal(null, err);
  console.log("Connected correctly to server");

  db.close();
});
```

Given that you booted up the **mongod** process earlier, the application should connect successfully and print **Connected correctly to server** to the console.

Let's add some code to show the different CRUD operations available.

Inserting a Document
--------------------
Let's create a function that will insert some documents for us.

```js
var insertDocuments = function(db, callback) {
  // Get the documents collection
  var collection = db.collection('documents');
  // Insert some documents
  collection.insertMany([
    {a : 1}, {a : 2}, {a : 3}
  ], function(err, result) {
    assert.equal(err, null);
    assert.equal(3, result.result.n);
    assert.equal(3, result.ops.length);
    console.log("Inserted 3 documents into the document collection");
    callback(result);
  });
}
```

The insert command returns a results object that contains several fields that might be useful (inspected in the sketch below).

* **result** Contains the result document from MongoDB
* **ops** Contains the documents inserted with added **_id** fields
* **connection** Contains the connection used to perform the insert
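For instance, a quick way to see those fields is to log them from the callback; a minimal sketch reusing the **collection** from above (purely illustrative, not part of the tutorial chain):

```js
collection.insertMany([{a : 1}, {a : 2}, {a : 3}], function(err, result) {
  assert.equal(err, null);
  // result.result.n is the server-reported number of inserted documents
  console.log("inserted " + result.result.n + " documents");
  // result.ops holds the inserted documents, now carrying generated _id fields
  result.ops.forEach(function(doc) {
    console.log(doc._id.toString());
  });
});
```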
Let's add a call to the **insertDocuments** function in the **MongoClient.connect** method callback.

```js
var MongoClient = require('mongodb').MongoClient
  , assert = require('assert');

// Connection URL
var url = 'mongodb://localhost:27017/myproject';
// Use connect method to connect to the Server
MongoClient.connect(url, function(err, db) {
  assert.equal(null, err);
  console.log("Connected correctly to server");

  insertDocuments(db, function() {
    db.close();
  });
});
```

We can now run the updated **app.js** file.

```
node app.js
```

You should see the following output after running the **app.js** file.

```
Connected correctly to server
Inserted 3 documents into the document collection
```

Updating a document
-------------------
Let's look at how to do a simple document update by adding a new field **b** to the document that has the field **a** set to **2**.

```js
var updateDocument = function(db, callback) {
  // Get the documents collection
  var collection = db.collection('documents');
  // Update document where a is 2, set b equal to 1
  collection.updateOne({ a : 2 }
    , { $set: { b : 1 } }, function(err, result) {
    assert.equal(err, null);
    assert.equal(1, result.result.n);
    console.log("Updated the document with the field a equal to 2");
    callback(result);
  });
}
```

The method will update the first document where the field **a** is equal to **2**, adding a new field **b** set to **1**. Let's update the callback function from **MongoClient.connect** to include the update method.

```js
var MongoClient = require('mongodb').MongoClient
  , assert = require('assert');

// Connection URL
var url = 'mongodb://localhost:27017/myproject';
// Use connect method to connect to the Server
MongoClient.connect(url, function(err, db) {
  assert.equal(null, err);
  console.log("Connected correctly to server");

  insertDocuments(db, function() {
    updateDocument(db, function() {
      db.close();
    });
  });
});
```
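**updateOne** also accepts an options document. For example, passing `{upsert: true}` inserts the document when no match exists; a sketch with hypothetical values (not wired into the tutorial chain, and note that running it changes the document counts asserted later):

```js
var upsertDocument = function(db, callback) {
  var collection = db.collection('documents');
  // No document has a equal to 5, so the upsert inserts one
  collection.updateOne({ a : 5 }, { $set: { b : 9 } }, { upsert: true }, function(err, result) {
    assert.equal(err, null);
    // upsertedId identifies the document created by the upsert
    console.log("upserted _id: " + result.upsertedId._id);
    callback(result);
  });
}
```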
Delete a document
-----------------
Next let's delete the document where the field **a** equals **3**.

```js
var deleteDocument = function(db, callback) {
  // Get the documents collection
  var collection = db.collection('documents');
  // Delete the document where a is 3
  collection.deleteOne({ a : 3 }, function(err, result) {
    assert.equal(err, null);
    assert.equal(1, result.result.n);
    console.log("Removed the document with the field a equal to 3");
    callback(result);
  });
}
```

This will delete the first document where the field **a** equals **3**. Let's add the method to the **MongoClient.connect** callback function.

```js
var MongoClient = require('mongodb').MongoClient
  , assert = require('assert');

// Connection URL
var url = 'mongodb://localhost:27017/myproject';
// Use connect method to connect to the Server
MongoClient.connect(url, function(err, db) {
  assert.equal(null, err);
  console.log("Connected correctly to server");

  insertDocuments(db, function() {
    updateDocument(db, function() {
      deleteDocument(db, function() {
        db.close();
      });
    });
  });
});
```

Finally let's retrieve all the documents using a simple find.

Find All Documents
------------------
We will finish up the Quickstart CRUD methods by performing a simple query that returns all the documents matching the query.

```js
var findDocuments = function(db, callback) {
  // Get the documents collection
  var collection = db.collection('documents');
  // Find some documents
  collection.find({}).toArray(function(err, docs) {
    assert.equal(err, null);
    assert.equal(2, docs.length);
    console.log("Found the following records");
    console.dir(docs);
    callback(docs);
  });
}
```

This query will return all the documents in the **documents** collection. Since we deleted a document, the total number of documents returned is **2**. Finally, let's add the **findDocuments** method to the **MongoClient.connect** callback.

```js
var MongoClient = require('mongodb').MongoClient
  , assert = require('assert');

// Connection URL
var url = 'mongodb://localhost:27017/myproject';
// Use connect method to connect to the Server
MongoClient.connect(url, function(err, db) {
  assert.equal(null, err);
  console.log("Connected correctly to server");

  insertDocuments(db, function() {
    updateDocument(db, function() {
      deleteDocument(db, function() {
        findDocuments(db, function() {
          db.close();
        });
      });
    });
  });
});
```
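The same **find** call accepts a query document, so filtering works like the other CRUD helpers; a small illustrative sketch that fetches just the document updated earlier:

```js
var findUpdatedDocument = function(db, callback) {
  var collection = db.collection('documents');
  // Match only the document where the earlier update set b to 1
  collection.find({ b : 1 }).toArray(function(err, docs) {
    assert.equal(err, null);
    assert.equal(1, docs.length);
    console.dir(docs);
    callback(docs);
  });
}
```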
This concludes the QuickStart of connecting to MongoDB and performing some basic operations using the MongoDB Node.js driver. For more detailed information you can look at the tutorials covering more specific topics of interest.

## Next Steps

* [MongoDB Documentation](http://mongodb.org/)
* [Read about Schemas](http://learnmongodbthehardway.com/)
* [Star us on GitHub](https://github.com/mongodb/node-mongodb-native)
71  node_modules/mongodb/conf.json  (generated, vendored, new file)
@@ -0,0 +1,71 @@
{
  "plugins": ["plugins/markdown", "docs/lib/jsdoc/examples_plugin.js"],
  "source": {
    "include": [
      "test/functional/operation_example_tests.js",
      "test/functional/operation_promises_example_tests.js",
      "test/functional/operation_generators_example_tests.js",
      "test/functional/authentication_tests.js",
      "test/functional/gridfs_stream_tests.js",
      "lib/admin.js",
      "lib/collection.js",
      "lib/cursor.js",
      "lib/aggregation_cursor.js",
      "lib/command_cursor.js",
      "lib/db.js",
      "lib/mongo_client.js",
      "lib/mongos.js",
      "lib/read_preference.js",
      "lib/replset.js",
      "lib/server.js",
      "lib/bulk/common.js",
      "lib/bulk/ordered.js",
      "lib/bulk/unordered.js",
      "lib/gridfs/grid_store.js",
      "node_modules/mongodb-core/lib/error.js",
      "lib/gridfs-stream/index.js",
      "node_modules/mongodb-core/lib/connection/logger.js",
      "node_modules/bson/lib/bson/binary.js",
      "node_modules/bson/lib/bson/code.js",
      "node_modules/bson/lib/bson/db_ref.js",
      "node_modules/bson/lib/bson/double.js",
      "node_modules/bson/lib/bson/long.js",
      "node_modules/bson/lib/bson/objectid.js",
      "node_modules/bson/lib/bson/symbol.js",
      "node_modules/bson/lib/bson/timestamp.js",
      "node_modules/bson/lib/bson/max_key.js",
      "node_modules/bson/lib/bson/min_key.js"
    ]
  },
  "templates": {
    "cleverLinks": true,
    "monospaceLinks": true,
    "default": {
      "outputSourceFiles" : true
    },
    "applicationName": "Node.js MongoDB Driver API",
    "disqus": true,
    "googleAnalytics": "UA-29229787-1",
    "openGraph": {
      "title": "",
      "type": "website",
      "image": "",
      "site_name": "",
      "url": ""
    },
    "meta": {
      "title": "",
      "description": "",
      "keyword": ""
    },
    "linenums": true
  },
  "markdown": {
    "parser": "gfm",
    "hardwrap": true,
    "tags": ["examples"]
  },
  "examples": {
    "indent": 4
  }
}
50  node_modules/mongodb/index.js  (generated, vendored, new file)
@@ -0,0 +1,50 @@
// Core module
var core = require('mongodb-core'),
  Instrumentation = require('./lib/apm');

// Set up the connect function
var connect = require('./lib/mongo_client').connect;

// Expose error class
connect.MongoError = core.MongoError;

// Actual driver classes exported
connect.Admin = require('./lib/admin');
connect.MongoClient = require('./lib/mongo_client');
connect.Db = require('./lib/db');
connect.Collection = require('./lib/collection');
connect.Server = require('./lib/server');
connect.ReplSet = require('./lib/replset');
connect.Mongos = require('./lib/mongos');
connect.ReadPreference = require('./lib/read_preference');
connect.GridStore = require('./lib/gridfs/grid_store');
connect.Chunk = require('./lib/gridfs/chunk');
connect.Logger = core.Logger;
connect.Cursor = require('./lib/cursor');
connect.GridFSBucket = require('./lib/gridfs-stream');

// BSON types exported
connect.Binary = core.BSON.Binary;
connect.Code = core.BSON.Code;
connect.Map = core.BSON.Map;
connect.DBRef = core.BSON.DBRef;
connect.Double = core.BSON.Double;
connect.Long = core.BSON.Long;
connect.MinKey = core.BSON.MinKey;
connect.MaxKey = core.BSON.MaxKey;
connect.ObjectID = core.BSON.ObjectID;
connect.ObjectId = core.BSON.ObjectID;
connect.Symbol = core.BSON.Symbol;
connect.Timestamp = core.BSON.Timestamp;

// Add connect method
connect.connect = connect;

// Set up the instrumentation method
connect.instrument = function(options, callback) {
  if(typeof options == 'function') callback = options, options = {};
  return new Instrumentation(core, options, callback);
}

// Set our exports to be the connect function
module.exports = connect;
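// A minimal usage sketch: because module.exports is the connect function itself,
// a single require serves both as a callable and as a namespace for the classes
// and BSON types attached above (the URL below is a placeholder):
//
//   var mongodb = require('mongodb');
//   var id = new mongodb.ObjectID();  // BSON type exposed on the export
//   mongodb.connect('mongodb://localhost:27017/test', function(err, db) {
//     if(err) throw err;
//     console.log(id.toString());
//     db.close();
//   });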
581  node_modules/mongodb/lib/admin.js  (generated, vendored, new file)
@@ -0,0 +1,581 @@
"use strict";

var toError = require('./utils').toError,
  Define = require('./metadata'),
  shallowClone = require('./utils').shallowClone;

/**
 * @fileOverview The **Admin** class is an internal class that allows convenient access to
 * the admin functionality and commands for MongoDB.
 *
 * **ADMIN Cannot directly be instantiated**
 * @example
 * var MongoClient = require('mongodb').MongoClient,
 *   test = require('assert');
 * // Connection url
 * var url = 'mongodb://localhost:27017/test';
 * // Connect using MongoClient
 * MongoClient.connect(url, function(err, db) {
 *   // Use the admin database for the operation
 *   var adminDb = db.admin();
 *
 *   // List all the available databases
 *   adminDb.listDatabases(function(err, dbs) {
 *     test.equal(null, err);
 *     test.ok(dbs.databases.length > 0);
 *     db.close();
 *   });
 * });
 */

/**
 * Create a new Admin instance (INTERNAL TYPE, do not instantiate directly)
 * @class
 * @return {Admin} a collection instance.
 */
var Admin = function(db, topology, promiseLibrary) {
  if(!(this instanceof Admin)) return new Admin(db, topology);

  // Internal state
  this.s = {
      db: db
    , topology: topology
    , promiseLibrary: promiseLibrary
  }
}

var define = Admin.define = new Define('Admin', Admin, false);

/**
 * The callback format for results
 * @callback Admin~resultCallback
 * @param {MongoError} error An error instance representing the error during the execution.
 * @param {object} result The result object if the command was executed successfully.
 */

/**
 * Execute a command
 * @method
 * @param {object} command The command hash
 * @param {object} [options=null] Optional settings.
 * @param {(ReadPreference|string)} [options.readPreference=null] The preferred read preference (ReadPreference.PRIMARY, ReadPreference.PRIMARY_PREFERRED, ReadPreference.SECONDARY, ReadPreference.SECONDARY_PREFERRED, ReadPreference.NEAREST).
 * @param {number} [options.maxTimeMS=null] Number of milliseconds to wait before aborting the query.
 * @param {Admin~resultCallback} [callback] The command result callback
 * @return {Promise} returns Promise if no callback passed
 */
Admin.prototype.command = function(command, options, callback) {
  var self = this;
  var args = Array.prototype.slice.call(arguments, 1);
  callback = args.pop();
  if(typeof callback != 'function') args.push(callback);
  options = args.length ? args.shift() : {};

  // Execute using callback
  if(typeof callback == 'function') return this.s.db.executeDbAdminCommand(command, options, function(err, doc) {
    return callback != null ? callback(err, doc) : null;
  });

  // Return a Promise
  return new this.s.promiseLibrary(function(resolve, reject) {
    self.s.db.executeDbAdminCommand(command, options, function(err, doc) {
      if(err) return reject(err);
      resolve(doc);
    });
  });
}

define.classMethod('command', {callback: true, promise:true});
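// Usage sketch: supplying a callback selects the callback path above, while
// omitting it wraps the same call in the configured promise library
// (assumes db is a connected Db instance):
//
//   var adminDb = db.admin();
//
//   // Callback style
//   adminDb.command({ ping: 1 }, function(err, result) {
//     if(err) return console.error(err);
//     console.log(result.ok); // 1 on success
//   });
//
//   // Promise style: no callback supplied, so a promise is returned
//   adminDb.serverInfo().then(function(info) {
//     console.log(info.version);
//   });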
/**
 * Retrieve the server information for the current
 * instance of the db client
 *
 * @param {Admin~resultCallback} [callback] The command result callback
 * @return {Promise} returns Promise if no callback passed
 */
Admin.prototype.buildInfo = function(callback) {
  var self = this;
  // Execute using callback
  if(typeof callback == 'function') return this.serverInfo(callback);

  // Return a Promise
  return new this.s.promiseLibrary(function(resolve, reject) {
    self.serverInfo(function(err, r) {
      if(err) return reject(err);
      resolve(r);
    });
  });
}

define.classMethod('buildInfo', {callback: true, promise:true});

/**
 * Retrieve the server information for the current
 * instance of the db client
 *
 * @param {Admin~resultCallback} [callback] The command result callback
 * @return {Promise} returns Promise if no callback passed
 */
Admin.prototype.serverInfo = function(callback) {
  var self = this;
  // Execute using callback
  if(typeof callback == 'function') return this.s.db.executeDbAdminCommand({buildinfo:1}, function(err, doc) {
    if(err != null) return callback(err, null);
    callback(null, doc);
  });

  // Return a Promise
  return new this.s.promiseLibrary(function(resolve, reject) {
    self.s.db.executeDbAdminCommand({buildinfo:1}, function(err, doc) {
      if(err) return reject(err);
      resolve(doc);
    });
  });
}

define.classMethod('serverInfo', {callback: true, promise:true});

/**
 * Retrieve this db's server status.
 *
 * @param {Admin~resultCallback} [callback] The command result callback
 * @return {Promise} returns Promise if no callback passed
 */
Admin.prototype.serverStatus = function(callback) {
  var self = this;

  // Execute using callback
  if(typeof callback == 'function') return serverStatus(self, callback)

  // Return a Promise
  return new this.s.promiseLibrary(function(resolve, reject) {
    serverStatus(self, function(err, r) {
      if(err) return reject(err);
      resolve(r);
    });
  });
};

var serverStatus = function(self, callback) {
  self.s.db.executeDbAdminCommand({serverStatus: 1}, function(err, doc) {
    if(err == null && doc.ok === 1) {
      callback(null, doc);
    } else {
      if(err) return callback(err, false);
      return callback(toError(doc), false);
    }
  });
}

define.classMethod('serverStatus', {callback: true, promise:true});

/**
 * Retrieve the current profiling Level for MongoDB
 *
 * @param {Admin~resultCallback} [callback] The command result callback
 * @return {Promise} returns Promise if no callback passed
 */
Admin.prototype.profilingLevel = function(callback) {
  var self = this;

  // Execute using callback
  if(typeof callback == 'function') return profilingLevel(self, callback)

  // Return a Promise
  return new this.s.promiseLibrary(function(resolve, reject) {
    profilingLevel(self, function(err, r) {
      if(err) return reject(err);
      resolve(r);
    });
  });
};

var profilingLevel = function(self, callback) {
  self.s.db.executeDbAdminCommand({profile:-1}, function(err, doc) {
    if(err == null && doc.ok === 1) {
      var was = doc.was;
      if(was == 0) return callback(null, "off");
      if(was == 1) return callback(null, "slow_only");
      if(was == 2) return callback(null, "all");
      return callback(new Error("Error: illegal profiling level value " + was), null);
    } else {
      err != null ? callback(err, null) : callback(new Error("Error with profile command"), null);
    }
  });
}

define.classMethod('profilingLevel', {callback: true, promise:true});

/**
 * Ping the MongoDB server and retrieve results
 *
 * @param {Admin~resultCallback} [callback] The command result callback
 * @return {Promise} returns Promise if no callback passed
 */
Admin.prototype.ping = function(options, callback) {
  var self = this;
  var args = Array.prototype.slice.call(arguments, 0);
  callback = args.pop();
  if(typeof callback != 'function') args.push(callback);

  // Execute using callback
  if(typeof callback == 'function') return this.s.db.executeDbAdminCommand({ping: 1}, callback);

  // Return a Promise
  return new this.s.promiseLibrary(function(resolve, reject) {
    self.s.db.executeDbAdminCommand({ping: 1}, function(err, r) {
      if(err) return reject(err);
      resolve(r);
    });
  });
}

define.classMethod('ping', {callback: true, promise:true});

/**
 * Authenticate a user against the server.
 * @method
 * @param {string} username The username.
 * @param {string} [password] The password.
 * @param {Admin~resultCallback} [callback] The command result callback
 * @return {Promise} returns Promise if no callback passed
 */
Admin.prototype.authenticate = function(username, password, options, callback) {
  var self = this;
  if(typeof options == 'function') callback = options, options = {};
  options = shallowClone(options);
  options.authdb = 'admin';

  // Execute using callback
  if(typeof callback == 'function') return this.s.db.authenticate(username, password, options, callback);

  // Return a Promise
  return new this.s.promiseLibrary(function(resolve, reject) {
    self.s.db.authenticate(username, password, options, function(err, r) {
      if(err) return reject(err);
      resolve(r);
    });
  });
}

define.classMethod('authenticate', {callback: true, promise:true});

/**
 * Logout user from server, fire off on all connections and remove all auth info
 * @method
 * @param {Admin~resultCallback} [callback] The command result callback
 * @return {Promise} returns Promise if no callback passed
 */
Admin.prototype.logout = function(callback) {
  var self = this;
  // Execute using callback
  if(typeof callback == 'function') return this.s.db.logout({authdb: 'admin'}, callback);

  // Return a Promise
  return new this.s.promiseLibrary(function(resolve, reject) {
    self.s.db.logout({authdb: 'admin'}, function(err, r) {
      if(err) return reject(err);
      resolve(r);
    });
  });
}

define.classMethod('logout', {callback: true, promise:true});

// Get write concern
var writeConcern = function(options, db) {
  options = shallowClone(options);

  // If options already contain write concerns return it
  if(options.w || options.wtimeout || options.j || options.fsync) {
    return options;
  }

  // Set db write concern if available
  if(db.writeConcern) {
    if(options.w) options.w = db.writeConcern.w;
    if(options.wtimeout) options.wtimeout = db.writeConcern.wtimeout;
    if(options.j) options.j = db.writeConcern.j;
    if(options.fsync) options.fsync = db.writeConcern.fsync;
  }

  // Return modified options
  return options;
}

/**
 * Add a user to the database.
 * @method
 * @param {string} username The username.
 * @param {string} password The password.
 * @param {object} [options=null] Optional settings.
 * @param {(number|string)} [options.w=null] The write concern.
 * @param {number} [options.wtimeout=null] The write concern timeout.
 * @param {boolean} [options.j=false] Specify a journal write concern.
 * @param {boolean} [options.fsync=false] Specify a file sync write concern.
 * @param {object} [options.customData=null] Custom data associated with the user (only MongoDB 2.6 or higher)
 * @param {object[]} [options.roles=null] Roles associated with the created user (only MongoDB 2.6 or higher)
 * @param {Admin~resultCallback} [callback] The command result callback
 * @return {Promise} returns Promise if no callback passed
 */
Admin.prototype.addUser = function(username, password, options, callback) {
  var self = this;
  var args = Array.prototype.slice.call(arguments, 2);
  callback = args.pop();
  if(typeof callback != 'function') args.push(callback);
  options = args.length ? args.shift() : {};
  options = options || {};
  // Get the options
  options = writeConcern(options, self.s.db)
  // Set the db name to admin
  options.dbName = 'admin';

  // Execute using callback
  if(typeof callback == 'function')
    return self.s.db.addUser(username, password, options, callback);

  // Return a Promise
  return new this.s.promiseLibrary(function(resolve, reject) {
    self.s.db.addUser(username, password, options, function(err, r) {
      if(err) return reject(err);
      resolve(r);
    });
  });
}

define.classMethod('addUser', {callback: true, promise:true});
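// Usage sketch for addUser (the user name, password, and roles are hypothetical;
// the roles option requires MongoDB 2.6 or higher):
//
//   var adminDb = db.admin();
//   adminDb.addUser('appAdmin', 'secret', {
//       w: 'majority'
//     , roles: [{ role: 'readWrite', db: 'myproject' }]
//   }, function(err, result) {
//     if(err) return console.error(err);
//     console.log('user created'); // dbName was forced to 'admin' above
//   });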
/**
 * Remove a user from a database
 * @method
 * @param {string} username The username.
 * @param {object} [options=null] Optional settings.
 * @param {(number|string)} [options.w=null] The write concern.
 * @param {number} [options.wtimeout=null] The write concern timeout.
 * @param {boolean} [options.j=false] Specify a journal write concern.
 * @param {boolean} [options.fsync=false] Specify a file sync write concern.
 * @param {Admin~resultCallback} [callback] The command result callback
 * @return {Promise} returns Promise if no callback passed
 */
Admin.prototype.removeUser = function(username, options, callback) {
  var self = this;
  var args = Array.prototype.slice.call(arguments, 1);
  callback = args.pop();
  if(typeof callback != 'function') args.push(callback);
  options = args.length ? args.shift() : {};
  options = options || {};
  // Get the options
  options = writeConcern(options, self.s.db)
  // Set the db name
  options.dbName = 'admin';

  // Execute using callback
  if(typeof callback == 'function')
    return self.s.db.removeUser(username, options, callback);

  // Return a Promise
  return new this.s.promiseLibrary(function(resolve, reject) {
    self.s.db.removeUser(username, options, function(err, r) {
      if(err) return reject(err);
      resolve(r);
    });
  });
}

define.classMethod('removeUser', {callback: true, promise:true});

/**
 * Set the current profiling level of MongoDB
 *
 * @param {string} level The new profiling level (off, slow_only, all).
 * @param {Admin~resultCallback} [callback] The command result callback.
 * @return {Promise} returns Promise if no callback passed
 */
Admin.prototype.setProfilingLevel = function(level, callback) {
  var self = this;

  // Execute using callback
  if(typeof callback == 'function') return setProfilingLevel(self, level, callback);

  // Return a Promise
  return new this.s.promiseLibrary(function(resolve, reject) {
    setProfilingLevel(self, level, function(err, r) {
      if(err) return reject(err);
      resolve(r);
    });
  });
};

var setProfilingLevel = function(self, level, callback) {
  var command = {};
  var profile = 0;

  if(level == "off") {
    profile = 0;
  } else if(level == "slow_only") {
    profile = 1;
  } else if(level == "all") {
    profile = 2;
  } else {
    return callback(new Error("Error: illegal profiling level value " + level));
  }

  // Set up the profile number
  command['profile'] = profile;

  self.s.db.executeDbAdminCommand(command, function(err, doc) {
    if(err == null && doc.ok === 1)
      return callback(null, level);
    return err != null ? callback(err, null) : callback(new Error("Error with profile command"), null);
  });
}

define.classMethod('setProfilingLevel', {callback: true, promise:true});
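// Usage sketch: set a level, then read it back (assumes db is a connected Db;
// accepted levels are 'off', 'slow_only', and 'all'):
//
//   var adminDb = db.admin();
//   adminDb.setProfilingLevel('slow_only', function(err, level) {
//     if(err) return console.error(err);
//     // profilingLevel maps the numeric `was` field back to the same strings
//     adminDb.profilingLevel(function(err, level) {
//       if(err) return console.error(err);
//       console.log(level); // 'slow_only'
//     });
//   });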
/**
 * Retrieve the current profiling information for MongoDB
 *
 * @param {Admin~resultCallback} [callback] The command result callback.
 * @return {Promise} returns Promise if no callback passed
 */
Admin.prototype.profilingInfo = function(callback) {
  var self = this;

  // Execute using callback
  if(typeof callback == 'function') return profilingInfo(self, callback);

  // Return a Promise
  return new this.s.promiseLibrary(function(resolve, reject) {
    profilingInfo(self, function(err, r) {
      if(err) return reject(err);
      resolve(r);
    });
  });
};

var profilingInfo = function(self, callback) {
  try {
    self.s.topology.cursor("admin.system.profile", { find: 'system.profile', query: {}}, {}).toArray(callback);
  } catch (err) {
    return callback(err, null);
  }
}

define.classMethod('profilingInfo', {callback: true, promise:true});

/**
 * Validate an existing collection
 *
 * @param {string} collectionName The name of the collection to validate.
 * @param {object} [options=null] Optional settings.
 * @param {Admin~resultCallback} [callback] The command result callback.
 * @return {Promise} returns Promise if no callback passed
 */
Admin.prototype.validateCollection = function(collectionName, options, callback) {
  var self = this;
  var args = Array.prototype.slice.call(arguments, 1);
  callback = args.pop();
  if(typeof callback != 'function') args.push(callback);
  options = args.length ? args.shift() : {};
  options = options || {};

  // Execute using callback
  if(typeof callback == 'function')
    return validateCollection(self, collectionName, options, callback);

  // Return a Promise
  return new this.s.promiseLibrary(function(resolve, reject) {
    validateCollection(self, collectionName, options, function(err, r) {
      if(err) return reject(err);
      resolve(r);
    });
  });
};

var validateCollection = function(self, collectionName, options, callback) {
  var command = {validate: collectionName};
  var keys = Object.keys(options);

  // Decorate command with extra options
  for(var i = 0; i < keys.length; i++) {
    if(options.hasOwnProperty(keys[i])) {
      command[keys[i]] = options[keys[i]];
    }
  }

  self.s.db.command(command, function(err, doc) {
    if(err != null) return callback(err, null);

    if(doc.ok === 0)
      return callback(new Error("Error with validate command"), null);
    if(doc.result != null && doc.result.constructor != String)
      return callback(new Error("Error with validation data"), null);
    if(doc.result != null && doc.result.match(/exception|corrupt/) != null)
      return callback(new Error("Error: invalid collection " + collectionName), null);
    if(doc.valid != null && !doc.valid)
      return callback(new Error("Error: invalid collection " + collectionName), null);

    return callback(null, doc);
  });
}

define.classMethod('validateCollection', {callback: true, promise:true});

/**
 * List the available databases
 *
 * @param {Admin~resultCallback} [callback] The command result callback.
 * @return {Promise} returns Promise if no callback passed
 */
Admin.prototype.listDatabases = function(callback) {
  var self = this;
  // Execute using callback
  if(typeof callback == 'function') return self.s.db.executeDbAdminCommand({listDatabases:1}, {}, callback);

  // Return a Promise
  return new this.s.promiseLibrary(function(resolve, reject) {
    self.s.db.executeDbAdminCommand({listDatabases:1}, {}, function(err, r) {
      if(err) return reject(err);
      resolve(r);
    });
  });
}

define.classMethod('listDatabases', {callback: true, promise:true});

/**
 * Get ReplicaSet status
 *
 * @param {Admin~resultCallback} [callback] The command result callback.
 * @return {Promise} returns Promise if no callback passed
 */
Admin.prototype.replSetGetStatus = function(callback) {
  var self = this;
  // Execute using callback
  if(typeof callback == 'function') return replSetGetStatus(self, callback);
  // Return a Promise
  return new this.s.promiseLibrary(function(resolve, reject) {
    replSetGetStatus(self, function(err, r) {
      if(err) return reject(err);
      resolve(r);
    });
  });
};

var replSetGetStatus = function(self, callback) {
  self.s.db.executeDbAdminCommand({replSetGetStatus:1}, function(err, doc) {
    if(err == null && doc.ok === 1)
      return callback(null, doc);
    if(err) return callback(err, false);
    callback(toError(doc), false);
  });
}

define.classMethod('replSetGetStatus', {callback: true, promise:true});

module.exports = Admin;
432  node_modules/mongodb/lib/aggregation_cursor.js  (generated, vendored, new file, diff truncated below)
@@ -0,0 +1,432 @@
"use strict";

var inherits = require('util').inherits
  , f = require('util').format
  , toError = require('./utils').toError
  , getSingleProperty = require('./utils').getSingleProperty
  , formattedOrderClause = require('./utils').formattedOrderClause
  , handleCallback = require('./utils').handleCallback
  , Logger = require('mongodb-core').Logger
  , EventEmitter = require('events').EventEmitter
  , ReadPreference = require('./read_preference')
  , MongoError = require('mongodb-core').MongoError
  , Readable = require('stream').Readable || require('readable-stream').Readable
  , Define = require('./metadata')
  , CoreCursor = require('./cursor')
  , Query = require('mongodb-core').Query
  , CoreReadPreference = require('mongodb-core').ReadPreference;

/**
 * @fileOverview The **AggregationCursor** class is an internal class that embodies an aggregation cursor on MongoDB,
 * allowing for iteration over the results returned from the underlying query. It supports
 * one-by-one document iteration, conversion to an array, and can be iterated as a Node 0.10.X
 * or higher stream.
 *
 * **AGGREGATIONCURSOR Cannot directly be instantiated**
 * @example
 * var MongoClient = require('mongodb').MongoClient,
 *   test = require('assert');
 * // Connection url
 * var url = 'mongodb://localhost:27017/test';
 * // Connect using MongoClient
 * MongoClient.connect(url, function(err, db) {
 *   // Create a collection we want to drop later
 *   var col = db.collection('createIndexExample1');
 *   // Insert a bunch of documents
 *   col.insert([{a:1, b:1}
 *     , {a:2, b:2}, {a:3, b:3}
 *     , {a:4, b:4}], {w:1}, function(err, result) {
 *     test.equal(null, err);
 *     // Show that all four documents are returned by the aggregation
 *     col.aggregate([], {cursor: {}}).toArray(function(err, items) {
 *       test.equal(null, err);
 *       test.equal(4, items.length);
 *       db.close();
 *     });
 *   });
 * });
 */

/**
 * Namespace provided by the Node.js stream module.
 * @external Readable
 */

/**
 * Creates a new Aggregation Cursor instance (INTERNAL TYPE, do not instantiate directly)
 * @class AggregationCursor
 * @extends external:Readable
 * @fires AggregationCursor#data
 * @fires AggregationCursor#end
 * @fires AggregationCursor#close
 * @fires AggregationCursor#readable
 * @return {AggregationCursor} an AggregationCursor instance.
 */
var AggregationCursor = function(bson, ns, cmd, options, topology, topologyOptions) {
  CoreCursor.apply(this, Array.prototype.slice.call(arguments, 0));
  var self = this;
  var state = AggregationCursor.INIT;
  var streamOptions = {};

  // MaxTimeMS
  var maxTimeMS = null;

  // Get the promiseLibrary
  var promiseLibrary = options.promiseLibrary;

  // No promise library selected fall back
  if(!promiseLibrary) {
    promiseLibrary = typeof global.Promise == 'function' ?
      global.Promise : require('es6-promise').Promise;
  }

  // Set up
  Readable.call(this, {objectMode: true});

  // Internal state
  this.s = {
    // MaxTimeMS
    maxTimeMS: maxTimeMS
    // State
    , state: state
    // Stream options
    , streamOptions: streamOptions
    // BSON
    , bson: bson
    // Namespace
    , ns: ns
    // Command
    , cmd: cmd
    // Options
    , options: options
    // Topology
    , topology: topology
    // Topology Options
    , topologyOptions: topologyOptions
    // Promise library
    , promiseLibrary: promiseLibrary
  }
}

/**
 * AggregationCursor stream data event, fired for each document in the cursor.
 *
 * @event AggregationCursor#data
 * @type {object}
 */

/**
 * AggregationCursor stream end event
 *
 * @event AggregationCursor#end
 * @type {null}
 */

/**
 * AggregationCursor stream close event
 *
 * @event AggregationCursor#close
 * @type {null}
 */

/**
 * AggregationCursor stream readable event
 *
 * @event AggregationCursor#readable
 * @type {null}
 */

// Inherit from Readable
inherits(AggregationCursor, Readable);

// Set the methods to inherit from prototype
var methodsToInherit = ['_next', 'next', 'each', 'forEach', 'toArray'
  , 'rewind', 'bufferedCount', 'readBufferedDocuments', 'close', 'isClosed', 'kill'
  , '_find', '_getmore', '_killcursor', 'isDead', 'explain', 'isNotified'];

// Extend the Cursor
for(var name in CoreCursor.prototype) {
  AggregationCursor.prototype[name] = CoreCursor.prototype[name];
}

var define = AggregationCursor.define = new Define('AggregationCursor', AggregationCursor, true);

/**
 * Set the batch size for the cursor.
 * @method
 * @param {number} value The batchSize for the cursor.
 * @throws {MongoError}
 * @return {AggregationCursor}
 */
AggregationCursor.prototype.batchSize = function(value) {
  if(this.s.state == AggregationCursor.CLOSED || this.isDead()) throw MongoError.create({message: "Cursor is closed", driver:true });
  if(typeof value != 'number') throw MongoError.create({message: "batchSize requires an integer", driver:true });
  if(this.s.cmd.cursor) this.s.cmd.cursor.batchSize = value;
  this.setCursorBatchSize(value);
  return this;
}

define.classMethod('batchSize', {callback: false, promise:false, returns: [AggregationCursor]});

/**
 * Add a geoNear stage to the aggregation pipeline
 * @method
 * @param {object} document The geoNear stage document.
 * @return {AggregationCursor}
 */
AggregationCursor.prototype.geoNear = function(document) {
  this.s.cmd.pipeline.push({$geoNear: document});
  return this;
}

define.classMethod('geoNear', {callback: false, promise:false, returns: [AggregationCursor]});

/**
 * Add a group stage to the aggregation pipeline
 * @method
 * @param {object} document The group stage document.
 * @return {AggregationCursor}
 */
AggregationCursor.prototype.group = function(document) {
  this.s.cmd.pipeline.push({$group: document});
  return this;
}

define.classMethod('group', {callback: false, promise:false, returns: [AggregationCursor]});

/**
 * Add a limit stage to the aggregation pipeline
 * @method
 * @param {number} value The limit value.
 * @return {AggregationCursor}
 */
AggregationCursor.prototype.limit = function(value) {
  this.s.cmd.pipeline.push({$limit: value});
  return this;
}

define.classMethod('limit', {callback: false, promise:false, returns: [AggregationCursor]});

/**
 * Add a match stage to the aggregation pipeline
 * @method
 * @param {object} document The match stage document.
 * @return {AggregationCursor}
 */
AggregationCursor.prototype.match = function(document) {
  this.s.cmd.pipeline.push({$match: document});
  return this;
}

define.classMethod('match', {callback: false, promise:false, returns: [AggregationCursor]});

/**
 * Set the maxTimeMS option on the aggregation command (only applied when the server supports it).
 * @method
 * @param {number} value The maxTimeMS value.
 * @return {AggregationCursor}
 */
AggregationCursor.prototype.maxTimeMS = function(value) {
  if(this.s.topology.lastIsMaster().minWireVersion > 2) {
    this.s.cmd.maxTimeMS = value;
  }
  return this;
}

define.classMethod('maxTimeMS', {callback: false, promise:false, returns: [AggregationCursor]});

/**
 * Add an out stage to the aggregation pipeline
 * @method
 * @param {string} destination The destination collection name.
 * @return {AggregationCursor}
 */
AggregationCursor.prototype.out = function(destination) {
  this.s.cmd.pipeline.push({$out: destination});
  return this;
}

define.classMethod('out', {callback: false, promise:false, returns: [AggregationCursor]});

/**
 * Add a project stage to the aggregation pipeline
 * @method
 * @param {object} document The project stage document.
 * @return {AggregationCursor}
 */
AggregationCursor.prototype.project = function(document) {
  this.s.cmd.pipeline.push({$project: document});
  return this;
}

define.classMethod('project', {callback: false, promise:false, returns: [AggregationCursor]});

/**
 * Add a redact stage to the aggregation pipeline
 * @method
 * @param {object} document The redact stage document.
 * @return {AggregationCursor}
 */
AggregationCursor.prototype.redact = function(document) {
  this.s.cmd.pipeline.push({$redact: document});
  return this;
}

define.classMethod('redact', {callback: false, promise:false, returns: [AggregationCursor]});

/**
 * Add a skip stage to the aggregation pipeline
 * @method
 * @param {number} value The skip value.
 * @return {AggregationCursor}
 */
AggregationCursor.prototype.skip = function(value) {
  this.s.cmd.pipeline.push({$skip: value});
  return this;
}

define.classMethod('skip', {callback: false, promise:false, returns: [AggregationCursor]});

/**
 * Add a sort stage to the aggregation pipeline
 * @method
 * @param {object} document The sort stage document.
 * @return {AggregationCursor}
 */
AggregationCursor.prototype.sort = function(document) {
  this.s.cmd.pipeline.push({$sort: document});
  return this;
}

define.classMethod('sort', {callback: false, promise:false, returns: [AggregationCursor]});

/**
 * Add an unwind stage to the aggregation pipeline
 * @method
 * @param {string} field The unwind field name.
 * @return {AggregationCursor}
 */
AggregationCursor.prototype.unwind = function(field) {
  this.s.cmd.pipeline.push({$unwind: field});
  return this;
}

define.classMethod('unwind', {callback: false, promise:false, returns: [AggregationCursor]});

AggregationCursor.prototype.get = AggregationCursor.prototype.toArray;
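
// A minimal usage sketch of the fluent pipeline builders above (illustrative
// only; collection and field names are hypothetical). Each builder pushes a
// stage onto cmd.pipeline and returns the cursor, so calls chain:
//
//   db.collection('orders').aggregate([], {cursor: {}})
//     .match({status: 'shipped'})
//     .group({_id: '$customerId', total: {$sum: '$amount'}})
//     .sort({total: -1})
//     .limit(10)
//     .toArray(function(err, docs) {
//       // docs holds at most ten aggregated results
//     });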

// Inherited methods
define.classMethod('toArray', {callback: true, promise:true});
define.classMethod('each', {callback: true, promise:false});
define.classMethod('forEach', {callback: true, promise:false});
define.classMethod('next', {callback: true, promise:true});
define.classMethod('close', {callback: true, promise:true});
define.classMethod('isClosed', {callback: false, promise:false, returns: [Boolean]});
define.classMethod('rewind', {callback: false, promise:false});
define.classMethod('bufferedCount', {callback: false, promise:false, returns: [Number]});
define.classMethod('readBufferedDocuments', {callback: false, promise:false, returns: [Array]});

/**
 * Get the next available document from the cursor; returns null if no more documents are available.
 * @function AggregationCursor.prototype.next
 * @param {AggregationCursor~resultCallback} [callback] The result callback.
 * @throws {MongoError}
 * @return {Promise} returns Promise if no callback passed
 */

/**
 * The callback format for results
 * @callback AggregationCursor~toArrayResultCallback
 * @param {MongoError} error An error instance representing the error during the execution.
 * @param {object[]} documents All the documents that satisfy the cursor.
 */

/**
 * Returns an array of documents. The caller is responsible for making sure that there
 * is enough memory to store the results. Note that the array only contains partial
 * results when this cursor has been previously accessed. In that case,
 * cursor.rewind() can be used to reset the cursor.
 * @method AggregationCursor.prototype.toArray
 * @param {AggregationCursor~toArrayResultCallback} [callback] The result callback.
 * @throws {MongoError}
 * @return {Promise} returns Promise if no callback passed
 */

/**
 * The callback format for results
 * @callback AggregationCursor~resultCallback
 * @param {MongoError} error An error instance representing the error during the execution.
 * @param {(object|null)} result The result object if the command was executed successfully.
 */

/**
 * Iterates over all the documents for this cursor. As with **{cursor.toArray}**,
 * not all of the elements will be iterated if this cursor has been previously accessed.
 * In that case, **{cursor.rewind}** can be used to reset the cursor. However, unlike
 * **{cursor.toArray}**, the cursor will only hold a maximum of batch size elements
 * at any given time if batch size is specified. Otherwise, the caller is responsible
 * for making sure that the entire result can fit in memory.
 * @method AggregationCursor.prototype.each
 * @param {AggregationCursor~resultCallback} callback The result callback.
 * @throws {MongoError}
 * @return {null}
 */

/**
 * Close the cursor, sending a killCursors command and emitting close.
 * @method AggregationCursor.prototype.close
 * @param {AggregationCursor~resultCallback} [callback] The result callback.
 * @return {Promise} returns Promise if no callback passed
 */

/**
 * Is the cursor closed
 * @method AggregationCursor.prototype.isClosed
 * @return {boolean}
 */

/**
 * Execute the explain for the cursor
 * @method AggregationCursor.prototype.explain
 * @param {AggregationCursor~resultCallback} [callback] The result callback.
 * @return {Promise} returns Promise if no callback passed
 */

/**
 * Clone the cursor
 * @function AggregationCursor.prototype.clone
 * @return {AggregationCursor}
 */

/**
 * Resets the cursor
 * @function AggregationCursor.prototype.rewind
 * @return {AggregationCursor}
 */

/**
 * The callback format for the forEach iterator method
 * @callback AggregationCursor~iteratorCallback
 * @param {Object} doc An emitted document for the iterator
 */

/**
 * The callback error format for the forEach iterator method
 * @callback AggregationCursor~endCallback
 * @param {MongoError} error An error instance representing the error during the execution.
 */

/*
 * Iterates over all the documents for this cursor using the iterator, callback pattern.
 * @method AggregationCursor.prototype.forEach
 * @param {AggregationCursor~iteratorCallback} iterator The iteration callback.
 * @param {AggregationCursor~endCallback} callback The end callback.
 * @throws {MongoError}
 * @return {null}
 */

AggregationCursor.INIT = 0;
AggregationCursor.OPEN = 1;
AggregationCursor.CLOSED = 2;

module.exports = AggregationCursor;
608
node_modules/mongodb/lib/apm.js
generated
vendored
Normal file
@@ -0,0 +1,608 @@
var EventEmitter = require('events').EventEmitter,
  inherits = require('util').inherits;

// Get prototypes
var AggregationCursor = require('./aggregation_cursor'),
  CommandCursor = require('./command_cursor'),
  OrderedBulkOperation = require('./bulk/ordered').OrderedBulkOperation,
  UnorderedBulkOperation = require('./bulk/unordered').UnorderedBulkOperation,
  GridStore = require('./gridfs/grid_store'),
  Server = require('./server'),
  ReplSet = require('./replset'),
  Mongos = require('./mongos'),
  Cursor = require('./cursor'),
  Collection = require('./collection'),
  Db = require('./db'),
  Admin = require('./admin');

var basicOperationIdGenerator = {
  operationId: 1,

  next: function() {
    return this.operationId++;
  }
}

var basicTimestampGenerator = {
  current: function() {
    return new Date().getTime();
  },

  duration: function(start, end) {
    return end - start;
  }
}
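
// Both generators can be swapped out via the Instrumentation options
// (options.operationIdGenerator / options.timestampGenerator, as read below).
// A minimal sketch of a custom high-resolution timestamp generator
// (illustrative only, not part of the driver):
//
//   var hrTimestampGenerator = {
//     current: function() { return process.hrtime(); },
//     duration: function(start, end) {
//       var delta = [end[0] - start[0], end[1] - start[1]];
//       return delta[0] * 1000 + delta[1] / 1e6; // milliseconds
//     }
//   };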

var sensitiveCommands = ['authenticate', 'saslStart', 'saslContinue', 'getnonce',
  'createUser', 'updateUser', 'copydbgetnonce', 'copydbsaslstart', 'copydb'];

var Instrumentation = function(core, options, callback) {
  options = options || {};

  // Optional id generators
  var operationIdGenerator = options.operationIdGenerator || basicOperationIdGenerator;
  // Optional timestamp generator
  var timestampGenerator = options.timestampGenerator || basicTimestampGenerator;
  // Extend with event emitter functionality
  EventEmitter.call(this);

  // Contains all the instrumentation overloads
  this.overloads = [];

  // ---------------------------------------------------------
  //
  // Instrument prototype
  //
  // ---------------------------------------------------------

  var instrumentPrototype = function(callback) {
    var instrumentations = []

    // Classes to support
    var classes = [GridStore, OrderedBulkOperation, UnorderedBulkOperation,
      CommandCursor, AggregationCursor, Cursor, Collection, Db];

    // Add instrumentations to the available list
    for(var i = 0; i < classes.length; i++) {
      if(classes[i].define) {
        instrumentations.push(classes[i].define.generate());
      }
    }

    // Return the list of instrumentation points
    callback(null, instrumentations);
  }

  // Did the user want to instrument the prototype
  if(typeof callback == 'function') {
    instrumentPrototype(callback);
  }

  // ---------------------------------------------------------
  //
  // Server
  //
  // ---------------------------------------------------------

  // Reference
  var self = this;
  // Names of methods we need to wrap
  var methods = ['command', 'insert', 'update', 'remove'];
  // Prototype
  var proto = core.Server.prototype;
  // Core server method we are going to wrap
  methods.forEach(function(x) {
    var func = proto[x];

    // Add to overloaded methods
    self.overloads.push({proto: proto, name:x, func:func});

    // The actual prototype
    proto[x] = function() {
      var requestId = core.Query.nextRequestId();
      // Get the arguments
      var args = Array.prototype.slice.call(arguments, 0);
      var ns = args[0];
      var commandObj = args[1];
      var options = args[2] || {};
      var keys = Object.keys(commandObj);
      var commandName = keys[0];
      var db = ns.split('.')[0];

      // Do we have a legacy insert/update/remove command
      if(x == 'insert' && !this.lastIsMaster().maxWireVersion) {
        commandName = 'insert';
        // Get the collection
        var col = ns.split('.');
        col.shift();
        col = col.join('.');

        // Re-write the command
        commandObj = {
          insert: col, documents: commandObj
        }

        if(options.writeConcern && Object.keys(options.writeConcern).length > 0) {
          commandObj.writeConcern = options.writeConcern;
        }

        commandObj.ordered = options.ordered != undefined ? options.ordered : true;
      } else if(x == 'update' && !this.lastIsMaster().maxWireVersion) {
        commandName = 'update';

        // Get the collection
        var col = ns.split('.');
        col.shift();
        col = col.join('.');

        // Re-write the command
        commandObj = {
          update: col, updates: commandObj
        }

        if(options.writeConcern && Object.keys(options.writeConcern).length > 0) {
          commandObj.writeConcern = options.writeConcern;
        }

        commandObj.ordered = options.ordered != undefined ? options.ordered : true;
      } else if(x == 'remove' && !this.lastIsMaster().maxWireVersion) {
        commandName = 'delete';

        // Get the collection
        var col = ns.split('.');
        col.shift();
        col = col.join('.');

        // Re-write the command
        commandObj = {
          delete: col, deletes: commandObj
        }

        if(options.writeConcern && Object.keys(options.writeConcern).length > 0) {
          commandObj.writeConcern = options.writeConcern;
        }

        commandObj.ordered = options.ordered != undefined ? options.ordered : true;
      } else if((x == 'insert' || x == 'update' || x == 'remove') && this.lastIsMaster().maxWireVersion >= 2) {
        // Skip the insert/update/remove commands as they are executed as actual write commands in 2.6 or higher
        return func.apply(this, args);
      }

      // Get the callback
      var callback = args.pop();
      // Set current callback operation id from the current context or create
      // a new one
      var ourOpId = callback.operationId || operationIdGenerator.next();

      // Get a connection reference for this server instance
      var connection = this.s.pool.get()

      // Emit the start event for the command
      var command = {
        // Returns the command.
        command: commandObj,
        // Returns the database name.
        databaseName: db,
        // Returns the command name.
        commandName: commandName,
        // Returns the driver generated request id.
        requestId: requestId,
        // Returns the driver generated operation id.
        // This is used to link events together such as bulk write operations. OPTIONAL.
        operationId: ourOpId,
        // Returns the connection id for the command. For languages that do not have this,
        // this MUST return the driver equivalent which MUST include the server address and port.
        // The name of this field is flexible to match the object that is returned from the driver.
        connectionId: connection
      };

      // Filter out any sensitive commands
      if(sensitiveCommands.indexOf(commandName.toLowerCase()) != -1) {
        command.command = {};
        command.command[commandName] = true;
      }

      // Emit the started event
      self.emit('started', command)

      // Start time
      var startTime = timestampGenerator.current();

      // Push our handler callback
      args.push(function(err, r) {
        var endTime = timestampGenerator.current();
        var command = {
          duration: timestampGenerator.duration(startTime, endTime),
          commandName: commandName,
          requestId: requestId,
          operationId: ourOpId,
          connectionId: connection
        };

        // If we have an error
        if(err || (r && r.result && r.result.ok == 0)) {
          command.failure = err || r.result.writeErrors || r.result;

          // Filter out any sensitive commands
          if(sensitiveCommands.indexOf(commandName.toLowerCase()) != -1) {
            command.failure = {};
          }

          self.emit('failed', command);
        } else if(commandObj && commandObj.writeConcern
          && commandObj.writeConcern.w == 0) {
          // If we have write concern 0
          command.reply = {ok:1};
          self.emit('succeeded', command);
        } else {
          command.reply = r && r.result ? r.result : r;

          // Filter out any sensitive commands
          if(sensitiveCommands.indexOf(commandName.toLowerCase()) != -1) {
            command.reply = {};
          }

          self.emit('succeeded', command);
        }

        // Return to caller
        callback(err, r);
      });

      // Apply the call
      func.apply(this, args);
    }
  });

  // ---------------------------------------------------------
  //
  // Bulk Operations
  //
  // ---------------------------------------------------------

  // Inject ourselves into the Bulk methods
  var methods = ['execute'];
  var prototypes = [
    require('./bulk/ordered').Bulk.prototype,
    require('./bulk/unordered').Bulk.prototype
  ]

  prototypes.forEach(function(proto) {
    // Core server method we are going to wrap
    methods.forEach(function(x) {
      var func = proto[x];

      // Add to overloaded methods
      self.overloads.push({proto: proto, name:x, func:func});

      // The actual prototype
      proto[x] = function() {
        var bulk = this;
        // Get the arguments
        var args = Array.prototype.slice.call(arguments, 0);
        // Set an operation Id on the bulk object
        this.operationId = operationIdGenerator.next();

        // Get the callback
        var callback = args.pop();
        // If we have a callback use this
        if(typeof callback == 'function') {
          args.push(function(err, r) {
            // Return to caller
            callback(err, r);
          });

          // Apply the call
          func.apply(this, args);
        } else {
          return func.apply(this, args);
        }
      }
    });
  });

  // ---------------------------------------------------------
  //
  // Cursor
  //
  // ---------------------------------------------------------

  // Inject ourselves into the Cursor methods
  var methods = ['_find', '_getmore', '_killcursor'];
  var prototypes = [
    require('./cursor').prototype,
    require('./command_cursor').prototype,
    require('./aggregation_cursor').prototype
  ]

  // Command name translation
  var commandTranslation = {
    '_find': 'find', '_getmore': 'getMore', '_killcursor': 'killCursors', '_explain': 'explain'
  }

  prototypes.forEach(function(proto) {

    // Core server method we are going to wrap
    methods.forEach(function(x) {
      var func = proto[x];

      // Add to overloaded methods
      self.overloads.push({proto: proto, name:x, func:func});

      // The actual prototype
      proto[x] = function() {
        var cursor = this;
        var requestId = core.Query.nextRequestId();
        var ourOpId = operationIdGenerator.next();
        var parts = this.ns.split('.');
        var db = parts[0];

        // Get the collection
        parts.shift();
        var collection = parts.join('.');

        // Set the command
        var command = this.query;
        var cmd = this.s.cmd;

        // If we have a find method, set the operationId on the cursor
        if(x == '_find') {
          cursor.operationId = ourOpId;
        }

        // Do we have a find command rewrite it
        if(x == '_getmore') {
          command = {
            getMore: this.cursorState.cursorId,
            collection: collection,
            batchSize: cmd.batchSize
          }

          if(cmd.maxTimeMS) command.maxTimeMS = cmd.maxTimeMS;
        } else if(x == '_killcursor') {
          command = {
            killCursors: collection,
            cursors: [this.cursorState.cursorId]
          }
        } else if(cmd.find) {
          command = {
            find: collection, filter: cmd.query
          }

          if(cmd.sort) command.sort = cmd.sort;
          if(cmd.fields) command.projection = cmd.fields;
          if(cmd.limit && cmd.limit < 0) {
            command.limit = Math.abs(cmd.limit);
            command.singleBatch = true;
          } else if(cmd.limit) {
            command.limit = Math.abs(cmd.limit);
          }

          // Options
          if(cmd.skip) command.skip = cmd.skip;
          if(cmd.hint) command.hint = cmd.hint;
          if(cmd.batchSize) command.batchSize = cmd.batchSize;
          if(typeof cmd.returnKey == 'boolean') command.returnKey = cmd.returnKey;
          if(cmd.comment) command.comment = cmd.comment;
          if(cmd.min) command.min = cmd.min;
          if(cmd.max) command.max = cmd.max;
          if(cmd.maxScan) command.maxScan = cmd.maxScan;
          if(cmd.maxTimeMS) command.maxTimeMS = cmd.maxTimeMS;

          // Flags
          if(typeof cmd.awaitData == 'boolean') command.awaitData = cmd.awaitData;
          if(typeof cmd.snapshot == 'boolean') command.snapshot = cmd.snapshot;
          if(typeof cmd.tailable == 'boolean') command.tailable = cmd.tailable;
          if(typeof cmd.oplogReplay == 'boolean') command.oplogReplay = cmd.oplogReplay;
          if(typeof cmd.noCursorTimeout == 'boolean') command.noCursorTimeout = cmd.noCursorTimeout;
          if(typeof cmd.partial == 'boolean') command.partial = cmd.partial;
          if(typeof cmd.showDiskLoc == 'boolean') command.showRecordId = cmd.showDiskLoc;

          // Read Concern
          if(cmd.readConcern) command.readConcern = cmd.readConcern;

          // Override method
          if(cmd.explain) command.explain = cmd.explain;
          if(cmd.exhaust) command.exhaust = cmd.exhaust;

          // If we have a explain flag
          if(cmd.explain) {
            // Create fake explain command
            command = {
              explain: command,
              verbosity: 'allPlansExecution'
            }

            // Set readConcern on the command if available
            if(cmd.readConcern) command.readConcern = cmd.readConcern

            // Set up the _explain name for the command
            x = '_explain';
          }
        } else {
          command = cmd;
        }

        // Set up the connection
        var connectionId = null;

        // Set local connection
        if(this.connection) connectionId = this.connection;
        if(!connectionId && this.server && this.server.getConnection) connectionId = this.server.getConnection();

        // Get the command Name
        var commandName = x == '_find' ? Object.keys(command)[0] : commandTranslation[x];

        // Emit the start event for the command
        var command = {
          // Returns the command.
          command: command,
          // Returns the database name.
          databaseName: db,
          // Returns the command name.
          commandName: commandName,
          // Returns the driver generated request id.
          requestId: requestId,
          // Returns the driver generated operation id.
          // This is used to link events together such as bulk write operations. OPTIONAL.
          operationId: this.operationId,
          // Returns the connection id for the command. For languages that do not have this,
          // this MUST return the driver equivalent which MUST include the server address and port.
          // The name of this field is flexible to match the object that is returned from the driver.
          connectionId: connectionId
        };

        // Get the arguments
        var args = Array.prototype.slice.call(arguments, 0);

        // Get the callback
        var callback = args.pop();

        // We do not have a callback but a Promise
        if(typeof callback == 'function' || command.commandName == 'killCursors') {
          var startTime = timestampGenerator.current();
          // Emit the started event
          self.emit('started', command)

          // Emit succeeded event with killcursor if we have a legacy protocol
          if(command.commandName == 'killCursors'
            && this.server.lastIsMaster()
            && this.server.lastIsMaster().maxWireVersion < 4) {
            // Emit the succeeded command
            var command = {
              duration: timestampGenerator.duration(startTime, timestampGenerator.current()),
              commandName: commandName,
              requestId: requestId,
              operationId: cursor.operationId,
              connectionId: cursor.server.getConnection(),
              reply: [{ok:1}]
            };

            // Emit the command
            return self.emit('succeeded', command)
          }

          // Add our callback handler
          args.push(function(err, r) {
            if(err) {
              // Command
              var command = {
                duration: timestampGenerator.duration(startTime, timestampGenerator.current()),
                commandName: commandName,
                requestId: requestId,
                operationId: ourOpId,
                connectionId: cursor.server.getConnection(),
                failure: err };

              // Emit the command
              self.emit('failed', command)
            } else {

              // Do we have a getMore
              if(commandName.toLowerCase() == 'getmore' && r == null) {
                r = {
                  cursor: {
                    id: cursor.cursorState.cursorId,
                    ns: cursor.ns,
                    nextBatch: cursor.cursorState.documents
                  }, ok:1
                }
              } else if(commandName.toLowerCase() == 'find' && r == null) {
                r = {
                  cursor: {
                    id: cursor.cursorState.cursorId,
                    ns: cursor.ns,
                    firstBatch: cursor.cursorState.documents
                  }, ok:1
                }
              } else if(commandName.toLowerCase() == 'killcursors' && r == null) {
                r = {
                  cursorsUnknown:[cursor.cursorState.lastCursorId],
                  ok:1
                }
              }

              // cursor id is zero, we can issue success command
              var command = {
                duration: timestampGenerator.duration(startTime, timestampGenerator.current()),
                commandName: commandName,
                requestId: requestId,
                operationId: cursor.operationId,
                connectionId: cursor.server.getConnection(),
                reply: r && r.result ? r.result : r
              };

              // Emit the command
              self.emit('succeeded', command)
            }

            // Return
            if(!callback) return;

            // Return to caller
            callback(err, r);
          });

          // Apply the call
          func.apply(this, args);
        } else {
          // Assume promise, push back the missing value
          args.push(callback);
          // Get the promise
          var promise = func.apply(this, args);
          // Return a new promise
          return new cursor.s.promiseLibrary(function(resolve, reject) {
            var startTime = timestampGenerator.current();
            // Emit the started event
            self.emit('started', command)
            // Execute the function
            promise.then(function(r) {
              // cursor id is zero, we can issue success command
              var command = {
                duration: timestampGenerator.duration(startTime, timestampGenerator.current()),
                commandName: commandName,
                requestId: requestId,
                operationId: cursor.operationId,
                connectionId: cursor.server.getConnection(),
                reply: cursor.cursorState.documents
              };

              // Emit the command
              self.emit('succeeded', command)
              // Resolve the wrapping promise so the caller receives the result
              resolve(r);
            }).catch(function(err) {
              // Command
              var command = {
                duration: timestampGenerator.duration(startTime, timestampGenerator.current()),
                commandName: commandName,
                requestId: requestId,
                operationId: ourOpId,
                connectionId: cursor.server.getConnection(),
                failure: err };

              // Emit the command
              self.emit('failed', command)
              // reject the promise
              reject(err);
            });
          });
        }
      }
    });
  });
}

inherits(Instrumentation, EventEmitter);

Instrumentation.prototype.uninstrument = function() {
  for(var i = 0; i < this.overloads.length; i++) {
    var obj = this.overloads[i];
    obj.proto[obj.name] = obj.func;
  }

  // Remove all listeners
  this.removeAllListeners('started');
  this.removeAllListeners('succeeded');
  this.removeAllListeners('failed');
}

module.exports = Instrumentation;
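
// A minimal usage sketch (assumes this module is wired up through the driver's
// top-level instrument() helper; the event payloads are the `command` objects
// built above):
//
//   var instrument = require('mongodb').instrument;
//   var listener = instrument(function(err, instrumentations) {});
//   listener.on('started', function(event) {
//     console.log(event.commandName, event.requestId);
//   });
//   listener.on('succeeded', function(event) {
//     console.log(event.commandName, 'took', event.duration, 'ms');
//   });
//   // Restore the original prototypes when done
//   listener.uninstrument();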
393
node_modules/mongodb/lib/bulk/common.js
generated
vendored
Normal file
@@ -0,0 +1,393 @@
"use strict";

var utils = require('../utils');

// Error codes
var UNKNOWN_ERROR = 8;
var INVALID_BSON_ERROR = 22;
var WRITE_CONCERN_ERROR = 64;
var MULTIPLE_ERROR = 65;

// Insert types
var INSERT = 1;
var UPDATE = 2;
var REMOVE = 3;

// Get the write concern, preferring explicit options over the collection-level default
var writeConcern = function(target, col, options) {
  if(options.w != null || options.j != null || options.fsync != null) {
    target.writeConcern = options;
  } else if(col.writeConcern.w != null || col.writeConcern.j != null || col.writeConcern.fsync != null) {
    target.writeConcern = col.writeConcern;
  }

  return target
}
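
// Illustration of the precedence implemented above (hypothetical values):
// per-call options win over the collection-level write concern.
//
//   writeConcern({}, {writeConcern: {w: 1}}, {w: 'majority', j: true});
//   // => {writeConcern: {w: 'majority', j: true}}
//
//   writeConcern({}, {writeConcern: {w: 1}}, {});
//   // => {writeConcern: {w: 1}}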

/**
 * Helper function to define properties
 * @ignore
 */
var defineReadOnlyProperty = function(self, name, value) {
  Object.defineProperty(self, name, {
    enumerable: true
    , get: function() {
      return value;
    }
  });
}

/**
 * Keeps the state of an unordered batch so we can rewrite the results
 * correctly after command execution
 * @ignore
 */
var Batch = function(batchType, originalZeroIndex) {
  this.originalZeroIndex = originalZeroIndex;
  this.currentIndex = 0;
  this.originalIndexes = [];
  this.batchType = batchType;
  this.operations = [];
  this.size = 0;
  this.sizeBytes = 0;
}

/**
 * Wraps a legacy operation so we can correctly rewrite its error
 * @ignore
 */
var LegacyOp = function(batchType, operation, index) {
  this.batchType = batchType;
  this.index = index;
  this.operation = operation;
}

/**
 * Create a new BulkWriteResult instance (INTERNAL TYPE, do not instantiate directly)
 *
 * @class
 * @property {boolean} ok Did bulk operation correctly execute
 * @property {number} nInserted number of inserted documents
 * @property {number} nMatched number of documents matched for update
 * @property {number} nUpserted Number of upserted documents
 * @property {number} nModified Number of documents updated physically on disk
 * @property {number} nRemoved Number of removed documents
 * @return {BulkWriteResult} a BulkWriteResult instance
 */
var BulkWriteResult = function(bulkResult) {
  defineReadOnlyProperty(this, "ok", bulkResult.ok);
  defineReadOnlyProperty(this, "nInserted", bulkResult.nInserted);
  defineReadOnlyProperty(this, "nUpserted", bulkResult.nUpserted);
  defineReadOnlyProperty(this, "nMatched", bulkResult.nMatched);
  defineReadOnlyProperty(this, "nModified", bulkResult.nModified);
  defineReadOnlyProperty(this, "nRemoved", bulkResult.nRemoved);

  /**
   * Return an array of inserted ids
   *
   * @return {object[]}
   */
  this.getInsertedIds = function() {
    return bulkResult.insertedIds;
  }

  /**
   * Return an array of upserted ids
   *
   * @return {object[]}
   */
  this.getUpsertedIds = function() {
    return bulkResult.upserted;
  }

  /**
   * Return the upserted id at position x
   *
   * @param {number} index the number of the upserted id to return, returns undefined if no result for passed in index
   * @return {object}
   */
  this.getUpsertedIdAt = function(index) {
    return bulkResult.upserted[index];
  }

  /**
   * Return raw internal result
   *
   * @return {object}
   */
  this.getRawResponse = function() {
    return bulkResult;
  }

  /**
   * Returns true if the bulk operation contains a write error
   *
   * @return {boolean}
   */
  this.hasWriteErrors = function() {
    return bulkResult.writeErrors.length > 0;
  }

  /**
   * Returns the number of write errors from the bulk operation
   *
   * @return {number}
   */
  this.getWriteErrorCount = function() {
    return bulkResult.writeErrors.length;
  }

  /**
   * Returns a specific write error object
   *
   * @return {WriteError}
   */
  this.getWriteErrorAt = function(index) {
    if(index < bulkResult.writeErrors.length) {
      return bulkResult.writeErrors[index];
    }
    return null;
  }

  /**
   * Retrieve all write errors
   *
   * @return {object[]}
   */
  this.getWriteErrors = function() {
    return bulkResult.writeErrors;
  }

  /**
   * Retrieve lastOp if available
   *
   * @return {object}
   */
  this.getLastOp = function() {
    return bulkResult.lastOp;
  }

  /**
   * Retrieve the write concern error if any
   *
   * @return {WriteConcernError}
   */
  this.getWriteConcernError = function() {
    if(bulkResult.writeConcernErrors.length == 0) {
      return null;
    } else if(bulkResult.writeConcernErrors.length == 1) {
      // Return the error
      return bulkResult.writeConcernErrors[0];
    } else {

      // Combine the errors
      var errmsg = "";
      for(var i = 0; i < bulkResult.writeConcernErrors.length; i++) {
        var err = bulkResult.writeConcernErrors[i];
        errmsg = errmsg + err.errmsg;

        // TODO: Something better
        if(i == 0) errmsg = errmsg + " and ";
      }

      return new WriteConcernError({ errmsg : errmsg, code : WRITE_CONCERN_ERROR });
    }
  }

  this.toJSON = function() {
    return bulkResult;
  }

  this.toString = function() {
    return "BulkWriteResult(" + JSON.stringify(this.toJSON()) + ")";
  }

  this.isOk = function() {
    return bulkResult.ok == 1;
  }
}
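
// A minimal sketch of inspecting a BulkWriteResult after executing a bulk
// operation (the `result` value is what execute() hands back; the bulk ops
// themselves are hypothetical):
//
//   bulk.execute(function(err, result) {
//     console.log(result.isOk(), result.nInserted, result.nModified);
//     if(result.hasWriteErrors()) {
//       result.getWriteErrors().forEach(function(writeError) {
//         console.log(writeError.toString());
//       });
//     }
//   });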

/**
 * Create a new WriteConcernError instance (INTERNAL TYPE, do not instantiate directly)
 *
 * @class
 * @property {number} code Write concern error code.
 * @property {string} errmsg Write concern error message.
 * @return {WriteConcernError} a WriteConcernError instance
 */
var WriteConcernError = function(err) {
  if(!(this instanceof WriteConcernError)) return new WriteConcernError(err);

  // Define properties
  defineReadOnlyProperty(this, "code", err.code);
  defineReadOnlyProperty(this, "errmsg", err.errmsg);

  this.toJSON = function() {
    return {code: err.code, errmsg: err.errmsg};
  }

  this.toString = function() {
    return "WriteConcernError(" + err.errmsg + ")";
  }
}

/**
 * Create a new WriteError instance (INTERNAL TYPE, do not instantiate directly)
 *
 * @class
 * @property {number} code Write error code.
 * @property {number} index Original bulk operation index of the write error.
 * @property {string} errmsg Write error message.
 * @return {WriteError} a WriteError instance
 */
var WriteError = function(err) {
  if(!(this instanceof WriteError)) return new WriteError(err);

  // Define properties
  defineReadOnlyProperty(this, "code", err.code);
  defineReadOnlyProperty(this, "index", err.index);
  defineReadOnlyProperty(this, "errmsg", err.errmsg);

  //
  // Define access methods
  this.getOperation = function() {
    return err.op;
  }

  this.toJSON = function() {
    return {code: err.code, index: err.index, errmsg: err.errmsg, op: err.op};
  }

  this.toString = function() {
    return "WriteError(" + JSON.stringify(this.toJSON()) + ")";
  }
}

/**
 * Merges results into shared data structure
 * @ignore
 */
var mergeBatchResults = function(ordered, batch, bulkResult, err, result) {
  // If we have an error set the result to be the err object
  if(err) {
    result = err;
  } else if(result && result.result) {
    result = result.result;
  } else if(result == null) {
    return;
  }

  // If we have a top-level error, stop processing and return
  if(result.ok == 0 && bulkResult.ok == 1) {
    bulkResult.ok = 0;
    // bulkResult.error = utils.toError(result);
    var writeError = {
      index: 0
      , code: result.code || 0
      , errmsg: result.message
      , op: batch.operations[0]
    };

    bulkResult.writeErrors.push(new WriteError(writeError));
    return;
  } else if(result.ok == 0 && bulkResult.ok == 0) {
    return;
  }

  // Add lastOp if available
  if(result.lastOp) {
    bulkResult.lastOp = result.lastOp;
  }

  // If we have an insert Batch type
  if(batch.batchType == INSERT && result.n) {
    bulkResult.nInserted = bulkResult.nInserted + result.n;
  }

  // If we have a remove Batch type
  if(batch.batchType == REMOVE && result.n) {
    bulkResult.nRemoved = bulkResult.nRemoved + result.n;
  }

  var nUpserted = 0;

  // We have an array of upserted values, we need to rewrite the indexes
  if(Array.isArray(result.upserted)) {
    nUpserted = result.upserted.length;

    for(var i = 0; i < result.upserted.length; i++) {
      bulkResult.upserted.push({
        index: result.upserted[i].index + batch.originalZeroIndex
        , _id: result.upserted[i]._id
      });
    }
  } else if(result.upserted) {

    nUpserted = 1;

    bulkResult.upserted.push({
      index: batch.originalZeroIndex
      , _id: result.upserted
    });
  }

  // If we have an update Batch type
  if(batch.batchType == UPDATE && result.n) {
    var nModified = result.nModified;
    bulkResult.nUpserted = bulkResult.nUpserted + nUpserted;
    bulkResult.nMatched = bulkResult.nMatched + (result.n - nUpserted);

    if(typeof nModified == 'number') {
      bulkResult.nModified = bulkResult.nModified + nModified;
    } else {
      bulkResult.nModified = null;
    }
  }

  if(Array.isArray(result.writeErrors)) {
    for(var i = 0; i < result.writeErrors.length; i++) {

      var writeError = {
        index: batch.originalZeroIndex + result.writeErrors[i].index
        , code: result.writeErrors[i].code
        , errmsg: result.writeErrors[i].errmsg
        , op: batch.operations[result.writeErrors[i].index]
      };

      bulkResult.writeErrors.push(new WriteError(writeError));
    }
  }

  if(result.writeConcernError) {
    bulkResult.writeConcernErrors.push(new WriteConcernError(result.writeConcernError));
  }
}
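
// Worked example of the index rewriting above (hypothetical batch): if a
// batch started at overall operation 5 (originalZeroIndex = 5) and the server
// reports a write error at its local index 2, the merged error is recorded
// against overall index 5 + 2 = 7, so callers see positions relative to the
// whole bulk operation rather than to one batch.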

//
// Clone the options
var cloneOptions = function(options) {
  var clone = {};
  var keys = Object.keys(options);
  for(var i = 0; i < keys.length; i++) {
    clone[keys[i]] = options[keys[i]];
  }

  return clone;
}

// Exports symbols
exports.BulkWriteResult = BulkWriteResult;
exports.WriteError = WriteError;
exports.Batch = Batch;
exports.LegacyOp = LegacyOp;
exports.mergeBatchResults = mergeBatchResults;
exports.cloneOptions = cloneOptions;
exports.writeConcern = writeConcern;
exports.INVALID_BSON_ERROR = INVALID_BSON_ERROR;
exports.WRITE_CONCERN_ERROR = WRITE_CONCERN_ERROR;
exports.MULTIPLE_ERROR = MULTIPLE_ERROR;
exports.UNKNOWN_ERROR = UNKNOWN_ERROR;
exports.INSERT = INSERT;
exports.UPDATE = UPDATE;
exports.REMOVE = REMOVE;
530
node_modules/mongodb/lib/bulk/ordered.js
generated
vendored
Normal file
@@ -0,0 +1,530 @@
"use strict";

var common = require('./common')
  , utils = require('../utils')
  , toError = require('../utils').toError
  , f = require('util').format
  , handleCallback = require('../utils').handleCallback
  , shallowClone = utils.shallowClone
  , WriteError = common.WriteError
  , BulkWriteResult = common.BulkWriteResult
  , LegacyOp = common.LegacyOp
  , ObjectID = require('mongodb-core').BSON.ObjectID
  , Define = require('../metadata')
  , Batch = common.Batch
  , mergeBatchResults = common.mergeBatchResults;

/**
 * Create a FindOperatorsOrdered instance (INTERNAL TYPE, do not instantiate directly)
 * @class
 * @return {FindOperatorsOrdered} a FindOperatorsOrdered instance.
 */
var FindOperatorsOrdered = function(self) {
  this.s = self.s;
}

/**
 * Add a single update document to the bulk operation
 *
 * @method
 * @param {object} doc update operations
 * @throws {MongoError}
 * @return {OrderedBulkOperation}
 */
FindOperatorsOrdered.prototype.update = function(updateDocument) {
  // Perform upsert
  var upsert = typeof this.s.currentOp.upsert == 'boolean' ? this.s.currentOp.upsert : false;

  // Establish the update command
  var document = {
    q: this.s.currentOp.selector
    , u: updateDocument
    , multi: true
    , upsert: upsert
  }

  // Clear out current Op
  this.s.currentOp = null;
  // Add the update document to the list
  return addToOperationsList(this, common.UPDATE, document);
}

/**
 * Add a single update-one document to the bulk operation
 *
 * @method
 * @param {object} doc update operations
 * @throws {MongoError}
 * @return {OrderedBulkOperation}
 */
FindOperatorsOrdered.prototype.updateOne = function(updateDocument) {
  // Perform upsert
  var upsert = typeof this.s.currentOp.upsert == 'boolean' ? this.s.currentOp.upsert : false;

  // Establish the update command
  var document = {
    q: this.s.currentOp.selector
    , u: updateDocument
    , multi: false
    , upsert: upsert
  }

  // Clear out current Op
  this.s.currentOp = null;
  // Add the update document to the list
  return addToOperationsList(this, common.UPDATE, document);
}

/**
 * Add a replace one operation to the bulk operation
 *
 * @method
 * @param {object} doc the new document to replace the existing one with
 * @throws {MongoError}
 * @return {OrderedBulkOperation}
 */
FindOperatorsOrdered.prototype.replaceOne = function(updateDocument) {
  return this.updateOne(updateDocument);
}

/**
 * Upsert modifier for update bulk operation
 *
 * @method
 * @throws {MongoError}
 * @return {FindOperatorsOrdered}
 */
FindOperatorsOrdered.prototype.upsert = function() {
  this.s.currentOp.upsert = true;
  return this;
}

/**
 * Add a remove one operation to the bulk operation
 *
 * @method
 * @throws {MongoError}
 * @return {OrderedBulkOperation}
 */
FindOperatorsOrdered.prototype.deleteOne = function() {
  // Establish the remove command
  var document = {
    q: this.s.currentOp.selector
    , limit: 1
  }

  // Clear out current Op
  this.s.currentOp = null;
  // Add the remove document to the list
  return addToOperationsList(this, common.REMOVE, document);
}

// Backward compatibility
FindOperatorsOrdered.prototype.removeOne = FindOperatorsOrdered.prototype.deleteOne;
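
// A minimal usage sketch of the find operators above (collection name and
// selectors are hypothetical). find() is defined further down in this file
// and stores the selector in s.currentOp:
//
//   var bulk = collection.initializeOrderedBulkOp();
//   bulk.find({name: 'a'}).upsert().updateOne({$set: {count: 1}});
//   bulk.find({name: 'b'}).removeOne();
//   bulk.execute(function(err, result) {});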

/**
 * Add a remove operation to the bulk operation
 *
 * @method
 * @throws {MongoError}
 * @return {OrderedBulkOperation}
 */
FindOperatorsOrdered.prototype.delete = function() {
  // Establish the remove command
  var document = {
    q: this.s.currentOp.selector
    , limit: 0
  }

  // Clear out current Op
  this.s.currentOp = null;
  // Add the remove document to the list
  return addToOperationsList(this, common.REMOVE, document);
}

// Backward compatibility
FindOperatorsOrdered.prototype.remove = FindOperatorsOrdered.prototype.delete;

// Add to internal list of documents
var addToOperationsList = function(_self, docType, document) {
  // Get the bsonSize
  var bsonSize = _self.s.bson.calculateObjectSize(document, false);

  // Throw error if the doc is bigger than the max BSON size
  if(bsonSize >= _self.s.maxBatchSizeBytes) throw toError("document is larger than the maximum size " + _self.s.maxBatchSizeBytes);
  // Create a new batch object if we don't have a current one
  if(_self.s.currentBatch == null) _self.s.currentBatch = new Batch(docType, _self.s.currentIndex);

  // Check if we need to create a new batch
  if(((_self.s.currentBatchSize + 1) >= _self.s.maxWriteBatchSize)
    || ((_self.s.currentBatchSizeBytes + bsonSize) >= _self.s.maxBatchSizeBytes)
    || (_self.s.currentBatch.batchType != docType)) {
    // Save the batch to the execution stack
    _self.s.batches.push(_self.s.currentBatch);

    // Create a new batch
    _self.s.currentBatch = new Batch(docType, _self.s.currentIndex);

    // Reset the current size trackers
    _self.s.currentBatchSize = 0;
    _self.s.currentBatchSizeBytes = 0;
  } else {
    // Update current batch size
    _self.s.currentBatchSize = _self.s.currentBatchSize + 1;
    _self.s.currentBatchSizeBytes = _self.s.currentBatchSizeBytes + bsonSize;
  }

  if(docType == common.INSERT) {
    _self.s.bulkResult.insertedIds.push({index: _self.s.currentIndex, _id: document._id});
  }

  // We have an array of documents
  if(Array.isArray(document)) {
    throw toError("operation passed in cannot be an Array");
  } else {
    _self.s.currentBatch.originalIndexes.push(_self.s.currentIndex);
    _self.s.currentBatch.operations.push(document)
    _self.s.currentIndex = _self.s.currentIndex + 1;
  }

  // Return self
  return _self;
}
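
// Illustration of the batch-splitting rules above (hypothetical ops): a new
// batch is started when the operation count would reach maxWriteBatchSize,
// when the byte size would reach maxBatchSizeBytes, or when the operation
// type changes. For example, with interleaved operations:
//
//   bulk.insert({a: 1});                        // batch 1 (INSERT)
//   bulk.find({a: 1}).update({$set: {b: 2}});   // batch 2 (UPDATE, type changed)
//   bulk.insert({a: 2});                        // batch 3 (INSERT again)
//
// An ordered bulk therefore executes these as three sequential commands.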

/**
 * Create a new OrderedBulkOperation instance (INTERNAL TYPE, do not instantiate directly)
 * @class
 * @property {number} length Get the number of operations in the bulk.
 * @return {OrderedBulkOperation} a OrderedBulkOperation instance.
 */
function OrderedBulkOperation(topology, collection, options) {
  options = options == null ? {} : options;
  // TODO Bring from driver information in isMaster
  var self = this;
  var executed = false;

  // Current item
  var currentOp = null;

  // Handle to the bson serializer, used to calculate running sizes
  var bson = topology.bson;

  // Namespace for the operation
  var namespace = collection.collectionName;

  // Set max byte size
  var maxBatchSizeBytes = topology.isMasterDoc && topology.isMasterDoc.maxBsonObjectSize
    ? topology.isMasterDoc.maxBsonObjectSize : (1024*1024*16);
  var maxWriteBatchSize = topology.isMasterDoc && topology.isMasterDoc.maxWriteBatchSize
    ? topology.isMasterDoc.maxWriteBatchSize : 1000;

  // Get the write concern
  var writeConcern = common.writeConcern(shallowClone(options), collection, options);

  // Get the promiseLibrary
  var promiseLibrary = options.promiseLibrary;

  // No promise library selected fall back
  if(!promiseLibrary) {
    promiseLibrary = typeof global.Promise == 'function' ?
      global.Promise : require('es6-promise').Promise;
  }

  // Current batch
  var currentBatch = null;
  var currentIndex = 0;
  var currentBatchSize = 0;
  var currentBatchSizeBytes = 0;
  var batches = [];

  // Final results
  var bulkResult = {
    ok: 1
    , writeErrors: []
    , writeConcernErrors: []
    , insertedIds: []
    , nInserted: 0
    , nUpserted: 0
    , nMatched: 0
    , nModified: 0
    , nRemoved: 0
    , upserted: []
  };

  // Internal state
  this.s = {
    // Final result
    bulkResult: bulkResult
    // Current batch state
    , currentBatch: null
    , currentIndex: 0
    , currentBatchSize: 0
    , currentBatchSizeBytes: 0
    , batches: []
    // Write concern
    , writeConcern: writeConcern
    // Max batch size options
    , maxBatchSizeBytes: maxBatchSizeBytes
    , maxWriteBatchSize: maxWriteBatchSize
    // Namespace
    , namespace: namespace
    // BSON
    , bson: bson
    // Topology
    , topology: topology
    // Options
    , options: options
    // Current operation
    , currentOp: currentOp
    // Executed
    , executed: executed
    // Collection
    , collection: collection
    // Promise Library
    , promiseLibrary: promiseLibrary
    // Fundamental error
    , err: null
    // Bypass validation
    , bypassDocumentValidation: typeof options.bypassDocumentValidation == 'boolean' ? options.bypassDocumentValidation : false
  }
}

var define = OrderedBulkOperation.define = new Define('OrderedBulkOperation', OrderedBulkOperation, false);

OrderedBulkOperation.prototype.raw = function(op) {
  var key = Object.keys(op)[0];

  // Set up the force server object id
  var forceServerObjectId = typeof this.s.options.forceServerObjectId == 'boolean'
    ? this.s.options.forceServerObjectId : this.s.collection.s.db.options.forceServerObjectId;

  // Update operations
  if((op.updateOne && op.updateOne.q)
    || (op.updateMany && op.updateMany.q)
    || (op.replaceOne && op.replaceOne.q)) {
    op[key].multi = op.updateOne || op.replaceOne ? false : true;
    return addToOperationsList(this, common.UPDATE, op[key]);
  }

  // Crud spec update format
  if(op.updateOne || op.updateMany || op.replaceOne) {
    var multi = op.updateOne || op.replaceOne ? false : true;
    var operation = {q: op[key].filter, u: op[key].update || op[key].replacement, multi: multi};
    operation.upsert = op[key].upsert ? true : false;
    return addToOperationsList(this, common.UPDATE, operation);
  }

  // Remove operations
  if(op.removeOne || op.removeMany || (op.deleteOne && op.deleteOne.q) || (op.deleteMany && op.deleteMany.q)) {
    op[key].limit = op.removeOne ? 1 : 0;
    return addToOperationsList(this, common.REMOVE, op[key]);
  }

  // Crud spec delete operations, less efficient
  if(op.deleteOne || op.deleteMany) {
    var limit = op.deleteOne ? 1 : 0;
    var operation = {q: op[key].filter, limit: limit};
    return addToOperationsList(this, common.REMOVE, operation);
  }

  // Insert operations
  if(op.insertOne && op.insertOne.document == null) {
    if(forceServerObjectId !== true && op.insertOne._id == null) op.insertOne._id = new ObjectID();
    return addToOperationsList(this, common.INSERT, op.insertOne);
  } else if(op.insertOne && op.insertOne.document) {
    if(forceServerObjectId !== true && op.insertOne.document._id == null) op.insertOne.document._id = new ObjectID();
    return addToOperationsList(this, common.INSERT, op.insertOne.document);
  }

  if(op.insertMany) {
    for(var i = 0; i < op.insertMany.length; i++) {
      if(forceServerObjectId !== true && op.insertMany[i]._id == null) op.insertMany[i]._id = new ObjectID();
      addToOperationsList(this, common.INSERT, op.insertMany[i]);
    }

    return;
  }

  // No valid type of operation
  throw toError("bulkWrite only supports insertOne, insertMany, updateOne, updateMany, removeOne, removeMany, deleteOne, deleteMany");
}
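
// Illustrative only: the operation shapes raw() accepts. Field values below
// are hypothetical; both the legacy {q: ...} form and the CRUD-spec form work.
//
//   bulk.raw({insertOne: {document: {a: 1}}});
//   bulk.raw({updateOne: {filter: {a: 1}, update: {$set: {b: 2}}, upsert: true}});
//   bulk.raw({deleteMany: {filter: {c: 1}}});
//   bulk.raw({removeOne: {q: {c: 1}}});   // legacy form, limit is set for you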

/**
 * Add a single insert document to the bulk operation
 *
 * @param {object} doc the document to insert
 * @throws {MongoError}
 * @return {OrderedBulkOperation}
 */
OrderedBulkOperation.prototype.insert = function(document) {
  if(this.s.collection.s.db.options.forceServerObjectId !== true && document._id == null) document._id = new ObjectID();
  return addToOperationsList(this, common.INSERT, document);
}

/**
 * Initiate a find operation for an update/updateOne/remove/removeOne/replaceOne
 *
 * @method
 * @param {object} selector The selector for the bulk operation.
 * @throws {MongoError}
 * @return {FindOperatorsOrdered}
 */
OrderedBulkOperation.prototype.find = function(selector) {
  if (!selector) {
    throw toError("Bulk find operation must specify a selector");
  }

  // Save a current selector
  this.s.currentOp = {
    selector: selector
  };

  return new FindOperatorsOrdered(this);
}
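
// Usage sketch (assumed API; FindOperatorsOrdered is defined earlier in this
// file): find() stores the selector, the chained operator consumes it. Field
// names are hypothetical.
//
//   bulk.find({status: 'open'}).update({$set: {status: 'closed'}});   // multi
//   bulk.find({_id: someId}).upsert().updateOne({$inc: {hits: 1}});
//   bulk.find({expired: true}).remove();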

Object.defineProperty(OrderedBulkOperation.prototype, 'length', {
  enumerable: true,
  get: function() {
    return this.s.currentIndex;
  }
});

//
// Execute next write command in a chain
var executeCommands = function(self, callback) {
  if(self.s.batches.length == 0) {
    return handleCallback(callback, null, new BulkWriteResult(self.s.bulkResult));
  }

  // Ordered execution of the command
  var batch = self.s.batches.shift();

  var resultHandler = function(err, result) {
    // Error is a driver related error not a bulk op error, terminate
    if((err && err.driver) || (err && err.message)) {
      return handleCallback(callback, err);
    }

    // If we have an error
    if(err) err.ok = 0;
    // Merge the results together
    var mergeResult = mergeBatchResults(true, batch, self.s.bulkResult, err, result);
    if(mergeResult != null) {
      return handleCallback(callback, null, new BulkWriteResult(self.s.bulkResult));
    }

    // If we are ordered and have errors and they are
    // not all replication errors terminate the operation
    if(self.s.bulkResult.writeErrors.length > 0) {
      return handleCallback(callback, toError(self.s.bulkResult.writeErrors[0]), new BulkWriteResult(self.s.bulkResult));
    }

    // Execute the next command in line
    executeCommands(self, callback);
  }

  var finalOptions = {ordered: true};
  if(self.s.writeConcern != null) {
    finalOptions.writeConcern = self.s.writeConcern;
  }

  // Set an operationId if provided
  if(self.operationId) {
    resultHandler.operationId = self.operationId;
  }

  // Serialize functions
  if(self.s.options.serializeFunctions) {
    finalOptions.serializeFunctions = true;
  }

  // Ignore undefined fields
  if(self.s.options.ignoreUndefined) {
    finalOptions.ignoreUndefined = true;
  }

  // Apply bypassDocumentValidation if specified
  if(self.s.bypassDocumentValidation == true) {
    finalOptions.bypassDocumentValidation = true;
  }

  try {
    if(batch.batchType == common.INSERT) {
      self.s.topology.insert(self.s.collection.namespace, batch.operations, finalOptions, resultHandler);
    } else if(batch.batchType == common.UPDATE) {
      self.s.topology.update(self.s.collection.namespace, batch.operations, finalOptions, resultHandler);
    } else if(batch.batchType == common.REMOVE) {
      self.s.topology.remove(self.s.collection.namespace, batch.operations, finalOptions, resultHandler);
    }
  } catch(err) {
    // Force top level error
    err.ok = 0;
    // Merge top level error and return
    handleCallback(callback, null, mergeBatchResults(false, batch, self.s.bulkResult, err, null));
  }
}

/**
 * The callback format for results
 * @callback OrderedBulkOperation~resultCallback
 * @param {MongoError} error An error instance representing the error during the execution.
 * @param {BulkWriteResult} result The bulk write result.
 */

/**
 * Execute the ordered bulk operation
 *
 * @method
 * @param {object} [options=null] Optional settings.
 * @param {(number|string)} [options.w=null] The write concern.
 * @param {number} [options.wtimeout=null] The write concern timeout.
 * @param {boolean} [options.j=false] Specify a journal write concern.
 * @param {boolean} [options.fsync=false] Specify a file sync write concern.
 * @param {OrderedBulkOperation~resultCallback} [callback] The result callback
 * @throws {MongoError}
 * @return {Promise} returns Promise if no callback passed
 */
OrderedBulkOperation.prototype.execute = function(_writeConcern, callback) {
  var self = this;
  if(this.s.executed) throw toError("batch cannot be re-executed");
  if(typeof _writeConcern == 'function') {
    callback = _writeConcern;
  } else {
    this.s.writeConcern = _writeConcern;
  }

  // If we have current batch
  if(this.s.currentBatch) this.s.batches.push(this.s.currentBatch);

  // If we have no operations in the bulk raise an error
  if(this.s.batches.length == 0) {
    throw toError("Invalid Operation, No operations in bulk");
  }

  // Execute using callback
  if(typeof callback == 'function') {
    return executeCommands(this, callback);
  }

  // Return a Promise
  return new this.s.promiseLibrary(function(resolve, reject) {
    executeCommands(self, function(err, r) {
      if(err) return reject(err);
      resolve(r);
    });
  });
}
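
// Usage sketch (assumptions: a connected `db` handle; collection and field
// names are hypothetical). Ordered bulks stop at the first write error;
// execute() returns a Promise when no callback is given.
//
//   var bulk = db.collection('users').initializeOrderedBulkOp();
//   bulk.insert({name: 'a'});
//   bulk.find({name: 'a'}).updateOne({$set: {seen: true}});
//   bulk.execute().then(function(result) {
//     // result.nInserted, result.nModified, ...
//   }).catch(function(err) {
//     // the first write error aborts the remaining batches
//   });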

define.classMethod('execute', {callback: true, promise:false});

/**
 * Returns an ordered batch object
 * @ignore
 */
var initializeOrderedBulkOp = function(topology, collection, options) {
  return new OrderedBulkOperation(topology, collection, options);
}

initializeOrderedBulkOp.OrderedBulkOperation = OrderedBulkOperation;
module.exports = initializeOrderedBulkOp;
module.exports.Bulk = OrderedBulkOperation;

539
node_modules/mongodb/lib/bulk/unordered.js
generated
vendored
Normal file
@@ -0,0 +1,539 @@
"use strict";

var common = require('./common')
  , utils = require('../utils')
  , toError = require('../utils').toError
  , f = require('util').format
  , handleCallback = require('../utils').handleCallback
  , shallowClone = utils.shallowClone
  , WriteError = common.WriteError
  , BulkWriteResult = common.BulkWriteResult
  , LegacyOp = common.LegacyOp
  , ObjectID = require('mongodb-core').BSON.ObjectID
  , Define = require('../metadata')
  , Batch = common.Batch
  , mergeBatchResults = common.mergeBatchResults;

/**
 * Create a FindOperatorsUnordered instance (INTERNAL TYPE, do not instantiate directly)
 * @class
 * @property {number} length Get the number of operations in the bulk.
 * @return {FindOperatorsUnordered} a FindOperatorsUnordered instance.
 */
var FindOperatorsUnordered = function(self) {
  this.s = self.s;
}

/**
 * Add a single update document to the bulk operation
 *
 * @method
 * @param {object} doc update operations
 * @throws {MongoError}
 * @return {UnorderedBulkOperation}
 */
FindOperatorsUnordered.prototype.update = function(updateDocument) {
  // Perform upsert
  var upsert = typeof this.s.currentOp.upsert == 'boolean' ? this.s.currentOp.upsert : false;

  // Establish the update command
  var document = {
      q: this.s.currentOp.selector
    , u: updateDocument
    , multi: true
    , upsert: upsert
  };

  // Clear out current Op
  this.s.currentOp = null;
  // Add the update document to the list
  return addToOperationsList(this, common.UPDATE, document);
}

/**
 * Add a single update one document to the bulk operation
 *
 * @method
 * @param {object} doc update operations
 * @throws {MongoError}
 * @return {UnorderedBulkOperation}
 */
FindOperatorsUnordered.prototype.updateOne = function(updateDocument) {
  // Perform upsert
  var upsert = typeof this.s.currentOp.upsert == 'boolean' ? this.s.currentOp.upsert : false;

  // Establish the update command
  var document = {
      q: this.s.currentOp.selector
    , u: updateDocument
    , multi: false
    , upsert: upsert
  };

  // Clear out current Op
  this.s.currentOp = null;
  // Add the update document to the list
  return addToOperationsList(this, common.UPDATE, document);
}

/**
 * Add a replace one operation to the bulk operation
 *
 * @method
 * @param {object} doc the new document to replace the existing one with
 * @throws {MongoError}
 * @return {UnorderedBulkOperation}
 */
FindOperatorsUnordered.prototype.replaceOne = function(updateDocument) {
  return this.updateOne(updateDocument);
}

/**
 * Upsert modifier for update bulk operation
 *
 * @method
 * @throws {MongoError}
 * @return {UnorderedBulkOperation}
 */
FindOperatorsUnordered.prototype.upsert = function() {
  this.s.currentOp.upsert = true;
  return this;
}

/**
 * Add a remove one operation to the bulk operation
 *
 * @method
 * @throws {MongoError}
 * @return {UnorderedBulkOperation}
 */
FindOperatorsUnordered.prototype.removeOne = function() {
  // Establish the remove command
  var document = {
      q: this.s.currentOp.selector
    , limit: 1
  };

  // Clear out current Op
  this.s.currentOp = null;
  // Add the remove document to the list
  return addToOperationsList(this, common.REMOVE, document);
}

/**
 * Add a remove operation to the bulk operation
 *
 * @method
 * @throws {MongoError}
 * @return {UnorderedBulkOperation}
 */
FindOperatorsUnordered.prototype.remove = function() {
  // Establish the remove command
  var document = {
      q: this.s.currentOp.selector
    , limit: 0
  };

  // Clear out current Op
  this.s.currentOp = null;
  // Add the remove document to the list
  return addToOperationsList(this, common.REMOVE, document);
}
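
// Usage sketch (assumed API): the same find()-based chaining works on an
// unordered bulk; field names below are hypothetical.
//
//   var bulk = collection.initializeUnorderedBulkOp();
//   bulk.find({score: {$lt: 10}}).update({$set: {flagged: true}});
//   bulk.find({retired: true}).removeOne();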

//
// Add to the operations list
//
var addToOperationsList = function(_self, docType, document) {
  // Get the bsonSize
  var bsonSize = _self.s.bson.calculateObjectSize(document, false);
  // Throw error if the doc is bigger than the max BSON size
  if(bsonSize >= _self.s.maxBatchSizeBytes) throw toError("document is larger than the maximum size " + _self.s.maxBatchSizeBytes);
  // Holds the current batch
  _self.s.currentBatch = null;
  // Get the right type of batch
  if(docType == common.INSERT) {
    _self.s.currentBatch = _self.s.currentInsertBatch;
  } else if(docType == common.UPDATE) {
    _self.s.currentBatch = _self.s.currentUpdateBatch;
  } else if(docType == common.REMOVE) {
    _self.s.currentBatch = _self.s.currentRemoveBatch;
  }

  // Create a new batch object if we don't have a current one
  if(_self.s.currentBatch == null) _self.s.currentBatch = new Batch(docType, _self.s.currentIndex);

  // Check if we need to create a new batch
  if(((_self.s.currentBatch.size + 1) >= _self.s.maxWriteBatchSize)
    || ((_self.s.currentBatch.sizeBytes + bsonSize) >= _self.s.maxBatchSizeBytes)
    || (_self.s.currentBatch.batchType != docType)) {
    // Save the batch to the execution stack
    _self.s.batches.push(_self.s.currentBatch);

    // Create a new batch
    _self.s.currentBatch = new Batch(docType, _self.s.currentIndex);
  }

  // We have an array of documents
  if(Array.isArray(document)) {
    throw toError("operation passed in cannot be an Array");
  } else {
    _self.s.currentBatch.operations.push(document);
    _self.s.currentBatch.originalIndexes.push(_self.s.currentIndex);
    _self.s.currentIndex = _self.s.currentIndex + 1;
  }

  // Save back the current Batch to the right type
  if(docType == common.INSERT) {
    _self.s.currentInsertBatch = _self.s.currentBatch;
    _self.s.bulkResult.insertedIds.push({index: _self.s.currentIndex, _id: document._id});
  } else if(docType == common.UPDATE) {
    _self.s.currentUpdateBatch = _self.s.currentBatch;
  } else if(docType == common.REMOVE) {
    _self.s.currentRemoveBatch = _self.s.currentBatch;
  }

  // Update current batch size
  _self.s.currentBatch.size = _self.s.currentBatch.size + 1;
  _self.s.currentBatch.sizeBytes = _self.s.currentBatch.sizeBytes + bsonSize;

  // Return self
  return _self;
}
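
// Editor's note (illustrative): unlike the ordered variant, the unordered
// bulk keeps one open batch per operation type (insert/update/remove), so
// mixed operations do not force a batch split. For example:
//
//   insert, update, insert, update   // ordered:   4 batches
//   insert, update, insert, update   // unordered: 2 batches (1 insert, 1 update)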

/**
 * Create a new UnorderedBulkOperation instance (INTERNAL TYPE, do not instantiate directly)
 * @class
 * @return {UnorderedBulkOperation} a UnorderedBulkOperation instance.
 */
var UnorderedBulkOperation = function(topology, collection, options) {
  options = options == null ? {} : options;

  // Contains reference to self
  var self = this;
  // Get the namespace for the write operations
  var namespace = collection.collectionName;
  // Used to mark operation as executed
  var executed = false;

  // Current item
  // var currentBatch = null;
  var currentOp = null;
  var currentIndex = 0;
  var batches = [];

  // The current Batches for the different operations
  var currentInsertBatch = null;
  var currentUpdateBatch = null;
  var currentRemoveBatch = null;

  // Handle to the bson serializer, used to calculate running sizes
  var bson = topology.bson;

  // Set max byte size
  var maxBatchSizeBytes = topology.isMasterDoc && topology.isMasterDoc.maxBsonObjectSize
    ? topology.isMasterDoc.maxBsonObjectSize : (1024*1025*16);
  var maxWriteBatchSize = topology.isMasterDoc && topology.isMasterDoc.maxWriteBatchSize
    ? topology.isMasterDoc.maxWriteBatchSize : 1000;

  // Get the write concern
  var writeConcern = common.writeConcern(shallowClone(options), collection, options);

  // Get the promiseLibrary
  var promiseLibrary = options.promiseLibrary;

  // No promise library selected fall back
  if(!promiseLibrary) {
    promiseLibrary = typeof global.Promise == 'function' ?
      global.Promise : require('es6-promise').Promise;
  }

  // Final results
  var bulkResult = {
      ok: 1
    , writeErrors: []
    , writeConcernErrors: []
    , insertedIds: []
    , nInserted: 0
    , nUpserted: 0
    , nMatched: 0
    , nModified: 0
    , nRemoved: 0
    , upserted: []
  };

  // Internal state
  this.s = {
    // Final result
      bulkResult: bulkResult
    // Current batch state
    , currentInsertBatch: null
    , currentUpdateBatch: null
    , currentRemoveBatch: null
    , currentBatch: null
    , currentIndex: 0
    , batches: []
    // Write concern
    , writeConcern: writeConcern
    // Max batch size options
    , maxBatchSizeBytes: maxBatchSizeBytes
    , maxWriteBatchSize: maxWriteBatchSize
    // Namespace
    , namespace: namespace
    // BSON
    , bson: bson
    // Topology
    , topology: topology
    // Options
    , options: options
    // Current operation
    , currentOp: currentOp
    // Executed
    , executed: executed
    // Collection
    , collection: collection
    // Promise Library
    , promiseLibrary: promiseLibrary
    // Bypass validation
    , bypassDocumentValidation: typeof options.bypassDocumentValidation == 'boolean' ? options.bypassDocumentValidation : false
  }
}

var define = UnorderedBulkOperation.define = new Define('UnorderedBulkOperation', UnorderedBulkOperation, false);

/**
 * Add a single insert document to the bulk operation
 *
 * @param {object} doc the document to insert
 * @throws {MongoError}
 * @return {UnorderedBulkOperation}
 */
UnorderedBulkOperation.prototype.insert = function(document) {
  if(this.s.collection.s.db.options.forceServerObjectId !== true && document._id == null) document._id = new ObjectID();
  return addToOperationsList(this, common.INSERT, document);
}

/**
 * Initiate a find operation for an update/updateOne/remove/removeOne/replaceOne
 *
 * @method
 * @param {object} selector The selector for the bulk operation.
 * @throws {MongoError}
 * @return {FindOperatorsUnordered}
 */
UnorderedBulkOperation.prototype.find = function(selector) {
  if (!selector) {
    throw toError("Bulk find operation must specify a selector");
  }

  // Save a current selector
  this.s.currentOp = {
    selector: selector
  };

  return new FindOperatorsUnordered(this);
}

Object.defineProperty(UnorderedBulkOperation.prototype, 'length', {
  enumerable: true,
  get: function() {
    return this.s.currentIndex;
  }
});

UnorderedBulkOperation.prototype.raw = function(op) {
  var key = Object.keys(op)[0];

  // Set up the force server object id
  var forceServerObjectId = typeof this.s.options.forceServerObjectId == 'boolean'
    ? this.s.options.forceServerObjectId : this.s.collection.s.db.options.forceServerObjectId;

  // Update operations
  if((op.updateOne && op.updateOne.q)
    || (op.updateMany && op.updateMany.q)
    || (op.replaceOne && op.replaceOne.q)) {
    op[key].multi = op.updateOne || op.replaceOne ? false : true;
    return addToOperationsList(this, common.UPDATE, op[key]);
  }

  // Crud spec update format
  if(op.updateOne || op.updateMany || op.replaceOne) {
    var multi = op.updateOne || op.replaceOne ? false : true;
    var operation = {q: op[key].filter, u: op[key].update || op[key].replacement, multi: multi};
    if(op[key].upsert) operation.upsert = true;
    return addToOperationsList(this, common.UPDATE, operation);
  }

  // Remove operations
  if(op.removeOne || op.removeMany || (op.deleteOne && op.deleteOne.q) || (op.deleteMany && op.deleteMany.q)) {
    op[key].limit = op.removeOne ? 1 : 0;
    return addToOperationsList(this, common.REMOVE, op[key]);
  }

  // Crud spec delete operations, less efficient
  if(op.deleteOne || op.deleteMany) {
    var limit = op.deleteOne ? 1 : 0;
    var operation = {q: op[key].filter, limit: limit};
    return addToOperationsList(this, common.REMOVE, operation);
  }

  // Insert operations
  if(op.insertOne && op.insertOne.document == null) {
    if(forceServerObjectId !== true && op.insertOne._id == null) op.insertOne._id = new ObjectID();
    return addToOperationsList(this, common.INSERT, op.insertOne);
  } else if(op.insertOne && op.insertOne.document) {
    if(forceServerObjectId !== true && op.insertOne.document._id == null) op.insertOne.document._id = new ObjectID();
    return addToOperationsList(this, common.INSERT, op.insertOne.document);
  }

  if(op.insertMany) {
    for(var i = 0; i < op.insertMany.length; i++) {
      if(forceServerObjectId !== true && op.insertMany[i]._id == null) op.insertMany[i]._id = new ObjectID();
      addToOperationsList(this, common.INSERT, op.insertMany[i]);
    }

    return;
  }

  // No valid type of operation
  throw toError("bulkWrite only supports insertOne, insertMany, updateOne, updateMany, removeOne, removeMany, deleteOne, deleteMany");
}

//
// Execute the command
var executeBatch = function(self, batch, callback) {
  var finalOptions = {ordered: false};
  if(self.s.writeConcern != null) {
    finalOptions.writeConcern = self.s.writeConcern;
  }

  var resultHandler = function(err, result) {
    // Error is a driver related error not a bulk op error, terminate
    if((err && err.driver) || (err && err.message)) {
      return handleCallback(callback, err);
    }

    // If we have an error
    if(err) err.ok = 0;
    handleCallback(callback, null, mergeBatchResults(false, batch, self.s.bulkResult, err, result));
  }

  // Set an operationId if provided
  if(self.operationId) {
    resultHandler.operationId = self.operationId;
  }

  // Serialize functions
  if(self.s.options.serializeFunctions) {
    finalOptions.serializeFunctions = true;
  }

  // Apply bypassDocumentValidation if specified
  if(self.s.bypassDocumentValidation == true) {
    finalOptions.bypassDocumentValidation = true;
  }

  try {
    if(batch.batchType == common.INSERT) {
      self.s.topology.insert(self.s.collection.namespace, batch.operations, finalOptions, resultHandler);
    } else if(batch.batchType == common.UPDATE) {
      self.s.topology.update(self.s.collection.namespace, batch.operations, finalOptions, resultHandler);
    } else if(batch.batchType == common.REMOVE) {
      self.s.topology.remove(self.s.collection.namespace, batch.operations, finalOptions, resultHandler);
    }
  } catch(err) {
    // Force top level error
    err.ok = 0;
    // Merge top level error and return
    handleCallback(callback, null, mergeBatchResults(false, batch, self.s.bulkResult, err, null));
  }
}

//
// Execute all the commands
var executeBatches = function(self, callback) {
  var numberOfCommandsToExecute = self.s.batches.length;
  var error = null;
  // Execute over all the batches
  for(var i = 0; i < self.s.batches.length; i++) {
    executeBatch(self, self.s.batches[i], function(err, result) {
      // Driver layer error capture it
      if(err) error = err;
      // Count down the number of commands left to execute
      numberOfCommandsToExecute = numberOfCommandsToExecute - 1;

      // Execute
      if(numberOfCommandsToExecute == 0) {
        // Driver level error
        if(error) return handleCallback(callback, error);
        // Treat write errors
        var writeError = self.s.bulkResult.writeErrors.length > 0 ? toError(self.s.bulkResult.writeErrors[0]) : null;
        handleCallback(callback, writeError, new BulkWriteResult(self.s.bulkResult));
      }
    });
  }
}
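
// Editor's note (illustrative sketch of the countdown pattern used above, not
// driver API): N callbacks decrement a shared counter and the last one
// reports, so batch completion order on the server is not defined.
//
//   var remaining = batches.length;
//   batches.forEach(function(b) {
//     send(b, function(err) {
//       if(--remaining === 0) done();
//     });
//   });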

/**
 * The callback format for results
 * @callback UnorderedBulkOperation~resultCallback
 * @param {MongoError} error An error instance representing the error during the execution.
 * @param {BulkWriteResult} result The bulk write result.
 */

/**
 * Execute the unordered bulk operation
 *
 * @method
 * @param {object} [options=null] Optional settings.
 * @param {(number|string)} [options.w=null] The write concern.
 * @param {number} [options.wtimeout=null] The write concern timeout.
 * @param {boolean} [options.j=false] Specify a journal write concern.
 * @param {boolean} [options.fsync=false] Specify a file sync write concern.
 * @param {UnorderedBulkOperation~resultCallback} [callback] The result callback
 * @throws {MongoError}
 * @return {Promise} returns Promise if no callback passed
 */
UnorderedBulkOperation.prototype.execute = function(_writeConcern, callback) {
  var self = this;
  if(this.s.executed) throw toError("batch cannot be re-executed");
  if(typeof _writeConcern == 'function') {
    callback = _writeConcern;
  } else {
    this.s.writeConcern = _writeConcern;
  }

  // If we have current batch
  if(this.s.currentInsertBatch) this.s.batches.push(this.s.currentInsertBatch);
  if(this.s.currentUpdateBatch) this.s.batches.push(this.s.currentUpdateBatch);
  if(this.s.currentRemoveBatch) this.s.batches.push(this.s.currentRemoveBatch);

  // If we have no operations in the bulk raise an error
  if(this.s.batches.length == 0) {
    throw toError("Invalid Operation, No operations in bulk");
  }

  // Execute using callback
  if(typeof callback == 'function') return executeBatches(this, callback);

  // Return a Promise
  return new this.s.promiseLibrary(function(resolve, reject) {
    executeBatches(self, function(err, r) {
      if(err) return reject(err);
      resolve(r);
    });
  });
}
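
// Usage sketch (assumptions as before: connected collection, hypothetical
// fields). Unordered bulks attempt every batch even if some writes fail.
//
//   var bulk = db.collection('logs').initializeUnorderedBulkOp();
//   bulk.insert({level: 'info'});
//   bulk.find({level: 'debug'}).remove();
//   bulk.execute(function(err, result) {
//     // err holds the first write error, result still covers all batches
//   });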

define.classMethod('execute', {callback: true, promise:false});

/**
 * Returns an unordered batch object
 * @ignore
 */
var initializeUnorderedBulkOp = function(topology, collection, options) {
  return new UnorderedBulkOperation(topology, collection, options);
}

initializeUnorderedBulkOp.UnorderedBulkOperation = UnorderedBulkOperation;
module.exports = initializeUnorderedBulkOp;
module.exports.Bulk = UnorderedBulkOperation;

3162
node_modules/mongodb/lib/collection.js
generated
vendored
Normal file
File diff suppressed because it is too large
318
node_modules/mongodb/lib/command_cursor.js
generated
vendored
Normal file
@@ -0,0 +1,318 @@
"use strict";

var inherits = require('util').inherits
  , f = require('util').format
  , toError = require('./utils').toError
  , getSingleProperty = require('./utils').getSingleProperty
  , formattedOrderClause = require('./utils').formattedOrderClause
  , handleCallback = require('./utils').handleCallback
  , Logger = require('mongodb-core').Logger
  , EventEmitter = require('events').EventEmitter
  , ReadPreference = require('./read_preference')
  , MongoError = require('mongodb-core').MongoError
  , Readable = require('stream').Readable || require('readable-stream').Readable
  , Define = require('./metadata')
  , CoreCursor = require('./cursor')
  , Query = require('mongodb-core').Query
  , CoreReadPreference = require('mongodb-core').ReadPreference;

/**
 * @fileOverview The **CommandCursor** class is an internal class that embodies a
 * generalized cursor based on a MongoDB command allowing for iteration over the
 * results returned. It supports one by one document iteration, conversion to an
 * array or can be iterated as a Node 0.10.X or higher stream
 *
 * **CommandCursor Cannot directly be instantiated**
 * @example
 * var MongoClient = require('mongodb').MongoClient,
 *   test = require('assert');
 * // Connection url
 * var url = 'mongodb://localhost:27017/test';
 * // Connect using MongoClient
 * MongoClient.connect(url, function(err, db) {
 *   // Create a collection we want to drop later
 *   var col = db.collection('listCollectionsExample1');
 *   // Insert a bunch of documents
 *   col.insert([{a:1, b:1}, {a:2, b:2}, {a:3, b:3}, {a:4, b:4}], {w:1}, function(err, result) {
 *     test.equal(null, err);
 *
 *     // List the database collections available
 *     db.listCollections().toArray(function(err, items) {
 *       test.equal(null, err);
 *       db.close();
 *     });
 *   });
 * });
 */

/**
 * Namespace provided by the Node.js stream module.
 * @external Readable
 */

/**
 * Creates a new Command Cursor instance (INTERNAL TYPE, do not instantiate directly)
 * @class CommandCursor
 * @extends external:Readable
 * @fires CommandCursor#data
 * @fires CommandCursor#end
 * @fires CommandCursor#close
 * @fires CommandCursor#readable
 * @return {CommandCursor} a CommandCursor instance.
 */
var CommandCursor = function(bson, ns, cmd, options, topology, topologyOptions) {
  CoreCursor.apply(this, Array.prototype.slice.call(arguments, 0));
  var self = this;
  var state = CommandCursor.INIT;
  var streamOptions = {};

  // MaxTimeMS
  var maxTimeMS = null;

  // Get the promiseLibrary
  var promiseLibrary = options.promiseLibrary;

  // No promise library selected fall back
  if(!promiseLibrary) {
    promiseLibrary = typeof global.Promise == 'function' ?
      global.Promise : require('es6-promise').Promise;
  }

  // Set up
  Readable.call(this, {objectMode: true});

  // Internal state
  this.s = {
    // MaxTimeMS
      maxTimeMS: maxTimeMS
    // State
    , state: state
    // Stream options
    , streamOptions: streamOptions
    // BSON
    , bson: bson
    // Namespace
    , ns: ns
    // Command
    , cmd: cmd
    // Options
    , options: options
    // Topology
    , topology: topology
    // Topology Options
    , topologyOptions: topologyOptions
    // Promise library
    , promiseLibrary: promiseLibrary
  }
}

/**
 * CommandCursor stream data event, fired for each document in the cursor.
 *
 * @event CommandCursor#data
 * @type {object}
 */

/**
 * CommandCursor stream end event
 *
 * @event CommandCursor#end
 * @type {null}
 */

/**
 * CommandCursor stream close event
 *
 * @event CommandCursor#close
 * @type {null}
 */

/**
 * CommandCursor stream readable event
 *
 * @event CommandCursor#readable
 * @type {null}
 */

// Inherit from Readable
inherits(CommandCursor, Readable);

// Set the methods to inherit from prototype
var methodsToInherit = ['_next', 'next', 'each', 'forEach', 'toArray'
  , 'rewind', 'bufferedCount', 'readBufferedDocuments', 'close', 'isClosed', 'kill'
  , '_find', '_getmore', '_killcursor', 'isDead', 'explain', 'isNotified', 'isKilled'];

// Only inherit the types we need
for(var i = 0; i < methodsToInherit.length; i++) {
  CommandCursor.prototype[methodsToInherit[i]] = CoreCursor.prototype[methodsToInherit[i]];
}

var define = CommandCursor.define = new Define('CommandCursor', CommandCursor, true);

/**
 * Set the ReadPreference for the cursor.
 * @method
 * @param {(string|ReadPreference)} readPreference The new read preference for the cursor.
 * @throws {MongoError}
 * @return {Cursor}
 */
CommandCursor.prototype.setReadPreference = function(r) {
  if(this.s.state == CommandCursor.CLOSED || this.isDead()) throw MongoError.create({message: "Cursor is closed", driver:true});
  if(this.s.state != CommandCursor.INIT) throw MongoError.create({message: 'cannot change cursor readPreference after cursor has been accessed', driver:true});

  if(r instanceof ReadPreference) {
    this.s.options.readPreference = new CoreReadPreference(r.mode, r.tags);
  } else {
    this.s.options.readPreference = new CoreReadPreference(r);
  }

  return this;
}

define.classMethod('setReadPreference', {callback: false, promise:false, returns: [CommandCursor]});

/**
 * Set the batch size for the cursor.
 * @method
 * @param {number} value The batchSize for the cursor.
 * @throws {MongoError}
 * @return {CommandCursor}
 */
CommandCursor.prototype.batchSize = function(value) {
  if(this.s.state == CommandCursor.CLOSED || this.isDead()) throw MongoError.create({message: "Cursor is closed", driver:true});
  if(typeof value != 'number') throw MongoError.create({message: "batchSize requires an integer", driver:true});
  if(this.s.cmd.cursor) this.s.cmd.cursor.batchSize = value;
  this.setCursorBatchSize(value);
  return this;
}

define.classMethod('batchSize', {callback: false, promise:false, returns: [CommandCursor]});
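
// Usage sketch: listCollections() returns a CommandCursor, so the batch size
// of each underlying getMore can be tuned before iteration starts.
//
//   var cursor = db.listCollections();
//   cursor.batchSize(10);
//   cursor.toArray(function(err, items) {
//     // items holds the collection descriptions
//   });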

/**
 * Set a maxTimeMS value on the command backing this cursor.
 * @method
 * @param {number} value The maxTimeMS value.
 * @return {CommandCursor}
 */
CommandCursor.prototype.maxTimeMS = function(value) {
  if(this.s.topology.lastIsMaster().minWireVersion > 2) {
    this.s.cmd.maxTimeMS = value;
  }
  return this;
}

define.classMethod('maxTimeMS', {callback: false, promise:false, returns: [CommandCursor]});

CommandCursor.prototype.get = CommandCursor.prototype.toArray;

define.classMethod('get', {callback: true, promise:false});

// Inherited methods
define.classMethod('toArray', {callback: true, promise:true});
define.classMethod('each', {callback: true, promise:false});
define.classMethod('forEach', {callback: true, promise:false});
define.classMethod('next', {callback: true, promise:true});
define.classMethod('close', {callback: true, promise:true});
define.classMethod('isClosed', {callback: false, promise:false, returns: [Boolean]});
define.classMethod('rewind', {callback: false, promise:false});
define.classMethod('bufferedCount', {callback: false, promise:false, returns: [Number]});
define.classMethod('readBufferedDocuments', {callback: false, promise:false, returns: [Array]});

/**
 * Get the next available document from the cursor, returns null if no more documents are available.
 * @function CommandCursor.prototype.next
 * @param {CommandCursor~resultCallback} [callback] The result callback.
 * @throws {MongoError}
 * @return {Promise} returns Promise if no callback passed
 */

/**
 * The callback format for results
 * @callback CommandCursor~toArrayResultCallback
 * @param {MongoError} error An error instance representing the error during the execution.
 * @param {object[]} documents All the documents that satisfy the cursor.
 */

/**
 * Returns an array of documents. The caller is responsible for making sure that there
 * is enough memory to store the results. Note that the array contains only partial
 * results if this cursor has previously been accessed.
 * @method CommandCursor.prototype.toArray
 * @param {CommandCursor~toArrayResultCallback} [callback] The result callback.
 * @throws {MongoError}
 * @return {Promise} returns Promise if no callback passed
 */

/**
 * The callback format for results
 * @callback CommandCursor~resultCallback
 * @param {MongoError} error An error instance representing the error during the execution.
 * @param {(object|null)} result The result object if the command was executed successfully.
 */

/**
 * Iterates over all the documents for this cursor. As with **{cursor.toArray}**,
 * not all of the elements will be iterated if this cursor has previously been accessed.
 * In that case, **{cursor.rewind}** can be used to reset the cursor. However, unlike
 * **{cursor.toArray}**, the cursor will only hold a maximum of batch size elements
 * at any given time if batch size is specified. Otherwise, the caller is responsible
 * for making sure that the entire result can fit in memory.
 * @method CommandCursor.prototype.each
 * @param {CommandCursor~resultCallback} callback The result callback.
 * @throws {MongoError}
 * @return {null}
 */

/**
 * Close the cursor, sending a KillCursor command and emitting close.
 * @method CommandCursor.prototype.close
 * @param {CommandCursor~resultCallback} [callback] The result callback.
 * @return {Promise} returns Promise if no callback passed
 */

/**
 * Is the cursor closed
 * @method CommandCursor.prototype.isClosed
 * @return {boolean}
 */

/**
 * Clone the cursor
 * @function CommandCursor.prototype.clone
 * @return {CommandCursor}
 */

/**
 * Resets the cursor
 * @function CommandCursor.prototype.rewind
 * @return {CommandCursor}
 */

/**
 * The callback format for the forEach iterator method
 * @callback CommandCursor~iteratorCallback
 * @param {Object} doc An emitted document for the iterator
 */

/**
 * The callback error format for the forEach iterator method
 * @callback CommandCursor~endCallback
 * @param {MongoError} error An error instance representing the error during the execution.
 */

/*
 * Iterates over all the documents for this cursor using the iterator, callback pattern.
 * @method CommandCursor.prototype.forEach
 * @param {CommandCursor~iteratorCallback} iterator The iteration callback.
 * @param {CommandCursor~endCallback} callback The end callback.
 * @throws {MongoError}
 * @return {null}
 */

CommandCursor.INIT = 0;
CommandCursor.OPEN = 1;
CommandCursor.CLOSED = 2;

module.exports = CommandCursor;

1200
node_modules/mongodb/lib/cursor.js
generated
vendored
Normal file
File diff suppressed because it is too large
1779
node_modules/mongodb/lib/db.js
generated
vendored
Normal file
File diff suppressed because it is too large
310
node_modules/mongodb/lib/gridfs-stream/download.js
generated
vendored
Normal file
@@ -0,0 +1,310 @@
var shallowClone = require('../utils').shallowClone;
var stream = require('stream');
var util = require('util');

module.exports = GridFSBucketReadStream;

/**
 * A readable stream that enables you to read buffers from GridFS.
 *
 * Do not instantiate this class directly. Use `openDownloadStream()` instead.
 *
 * @class
 * @param {Collection} chunks Handle for chunks collection
 * @param {Collection} files Handle for files collection
 * @param {Object} readPreference The read preference to use
 * @param {Object} filter The query to use to find the file document
 * @param {Object} [options=null] Optional settings.
 * @param {Object} [options.sort=null] Optional sort for the file find query
 * @param {Number} [options.skip=null] Optional skip for the file find query
 * @param {Number} [options.start=null] Optional 0-based offset in bytes to start streaming from
 * @param {Number} [options.end=null] Optional 0-based offset in bytes to stop streaming before
 * @fires GridFSBucketReadStream#error
 * @fires GridFSBucketReadStream#file
 * @return {GridFSBucketReadStream} a GridFSBucketReadStream instance.
 */

function GridFSBucketReadStream(chunks, files, readPreference, filter, options) {
  var _this = this;
  this.s = {
    bytesRead: 0,
    chunks: chunks,
    cursor: null,
    expected: 0,
    files: files,
    filter: filter,
    init: false,
    expectedEnd: 0,
    file: null,
    options: options,
    readPreference: readPreference
  };

  stream.Readable.call(this);
}

util.inherits(GridFSBucketReadStream, stream.Readable);

/**
 * An error occurred
 *
 * @event GridFSBucketReadStream#error
 * @type {Error}
 */

/**
 * Fires when the stream loaded the file document corresponding to the
 * provided id.
 *
 * @event GridFSBucketReadStream#file
 * @type {object}
 */

/**
 * Reads from the cursor and pushes to the stream.
 * @method
 */

GridFSBucketReadStream.prototype._read = function() {
  var _this = this;
  waitForFile(_this, function() {
    doRead(_this);
  });
};

/**
 * Sets the 0-based offset in bytes to start streaming from. Throws
 * an error if this stream has entered flowing mode
 * (e.g. if you've already called `on('data')`)
 * @method
 * @param {Number} start Offset in bytes to start reading at
 * @return {GridFSBucketReadStream}
 */

GridFSBucketReadStream.prototype.start = function(start) {
  throwIfInitialized(this);
  this.s.options.start = start;
  return this;
};

/**
 * Sets the 0-based offset in bytes to stop streaming before. Throws
 * an error if this stream has entered flowing mode
 * (e.g. if you've already called `on('data')`)
 * @method
 * @param {Number} end Offset in bytes to stop reading at
 * @return {GridFSBucketReadStream}
 */

GridFSBucketReadStream.prototype.end = function(end) {
  throwIfInitialized(this);
  this.s.options.end = end;
  return this;
};
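
// Usage sketch (hypothetical file id): stream only bytes [1024, 2048) of a
// stored file by setting the range before the stream starts flowing.
//
//   bucket.openDownloadStream(fileId)
//     .start(1024)
//     .end(2048)
//     .pipe(require('fs').createWriteStream('./slice.bin'));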

/**
 * @ignore
 */

function throwIfInitialized(self) {
  if (self.s.init) {
    throw new Error('You cannot change options after the stream has entered ' +
      'flowing mode!');
  }
}

/**
 * @ignore
 */

function doRead(_this) {
  _this.s.cursor.next(function(error, doc) {
    if (error) {
      return __handleError(_this, error);
    }
    if (!doc) {
      return _this.push(null);
    }

    var bytesRemaining = _this.s.file.length - _this.s.bytesRead;
    var expectedN = _this.s.expected++;
    var expectedLength = Math.min(_this.s.file.chunkSize, bytesRemaining);
    if (doc.n > expectedN) {
      var errmsg = 'ChunkIsMissing: Got unexpected n: ' + doc.n +
        ', expected: ' + expectedN;
      return __handleError(_this, new Error(errmsg));
    }
    if (doc.n < expectedN) {
      var errmsg = 'ExtraChunk: Got unexpected n: ' + doc.n +
        ', expected: ' + expectedN;
      return __handleError(_this, new Error(errmsg));
    }
    if (doc.data.length() !== expectedLength) {
      if (bytesRemaining <= 0) {
        var errmsg = 'ExtraChunk: Got unexpected n: ' + doc.n;
        return __handleError(_this, new Error(errmsg));
      }
      var errmsg = 'ChunkIsWrongSize: Got unexpected length: ' +
        doc.data.length() + ', expected: ' + expectedLength;
      return __handleError(_this, new Error(errmsg));
    }

    _this.s.bytesRead += doc.data.length();

    if (doc.data.buffer.length === 0) {
      return _this.push(null);
    }

    var sliceStart = null;
    var sliceEnd = null;
    var buf = doc.data.buffer;
    if (_this.s.bytesToSkip != null) {
      sliceStart = _this.s.bytesToSkip;
      _this.s.bytesToSkip = 0;
    }

    if (expectedN === _this.s.expectedEnd && _this.s.bytesToTrim != null) {
      sliceEnd = _this.s.bytesToTrim;
    }

    if (sliceStart != null || sliceEnd != null) {
      buf = buf.slice(sliceStart || 0, sliceEnd || buf.length);
    }

    _this.push(buf);
  });
}

/**
 * @ignore
 */

function init(self) {
  var findOneOptions = {};
  if (self.s.readPreference) {
    findOneOptions.readPreference = self.s.readPreference;
  }
  if (self.s.options && self.s.options.sort) {
    findOneOptions.sort = self.s.options.sort;
  }
  if (self.s.options && self.s.options.skip) {
    findOneOptions.skip = self.s.options.skip;
  }

  self.s.files.findOne(self.s.filter, findOneOptions, function(error, doc) {
    if (error) {
      return __handleError(self, error);
    }
    if (!doc) {
      var identifier = self.s.filter._id ?
        self.s.filter._id.toString() : self.s.filter.filename;
      var errmsg = 'FileNotFound: file ' + identifier + ' was not found';
      return __handleError(self, new Error(errmsg));
    }

    // If document is empty, kill the stream immediately and don't
    // execute any reads
    if (doc.length <= 0) {
      self.push(null);
      return;
    }

    self.s.cursor = self.s.chunks.find({ files_id: doc._id }).sort({ n: 1 });
    if (self.s.readPreference) {
      self.s.cursor.setReadPreference(self.s.readPreference);
    }

    self.s.expectedEnd = Math.ceil(doc.length / doc.chunkSize);
    self.s.file = doc;
    self.s.bytesToSkip = handleStartOption(self, doc, self.s.cursor,
      self.s.options);
    self.s.bytesToTrim = handleEndOption(self, doc, self.s.cursor,
      self.s.options);
    self.emit('file', doc);
  });
}

/**
 * @ignore
 */

function waitForFile(_this, callback) {
  if (_this.s.file) {
    return callback();
  }

  if (!_this.s.init) {
    init(_this);
    _this.s.init = true;
  }

  _this.once('file', function() {
    callback();
  });
}

/**
 * @ignore
 */

function handleStartOption(stream, doc, cursor, options) {
  if (options && options.start != null) {
    if (options.start > doc.length) {
      throw new Error('Stream start (' + options.start + ') must not be ' +
        'more than the length of the file (' + doc.length + ')');
    }
    if (options.start < 0) {
      throw new Error('Stream start (' + options.start + ') must not be ' +
        'negative');
    }
    if (options.end != null && options.end < options.start) {
      throw new Error('Stream start (' + options.start + ') must not be ' +
        'greater than stream end (' + options.end + ')');
    }

    cursor.skip(Math.floor(options.start / doc.chunkSize));

    stream.s.bytesRead = Math.floor(options.start / doc.chunkSize) *
      doc.chunkSize;
    stream.s.expected = Math.floor(options.start / doc.chunkSize);

    return options.start - stream.s.bytesRead;
  }
}
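
// Worked example (illustrative numbers): with chunkSize = 255 * 1024 = 261120
// and options.start = 600000:
//
//   chunks skipped   = floor(600000 / 261120) = 2
//   bytesRead primed = 2 * 261120             = 522240
//   bytesToSkip      = 600000 - 522240        = 77760
//
// so 77760 bytes are sliced off the front of the first chunk read (n = 2).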

/**
 * @ignore
 */

function handleEndOption(stream, doc, cursor, options) {
  if (options && options.end != null) {
    if (options.end > doc.length) {
      throw new Error('Stream end (' + options.end + ') must not be ' +
        'more than the length of the file (' + doc.length + ')');
    }
    if (options.end < 0) {
      throw new Error('Stream end (' + options.end + ') must not be ' +
        'negative');
    }

    var start = options.start != null ?
      Math.floor(options.start / doc.chunkSize) :
      0;

    cursor.limit(Math.ceil(options.end / doc.chunkSize) - start);

    stream.s.expectedEnd = Math.ceil(options.end / doc.chunkSize);

    return (Math.ceil(options.end / doc.chunkSize) * doc.chunkSize) -
      options.end;
  }
}

/**
 * @ignore
 */

function __handleError(_this, error) {
  _this.emit('error', error);
}

335
node_modules/mongodb/lib/gridfs-stream/index.js
generated
vendored
Normal file
@@ -0,0 +1,335 @@
var Emitter = require('events').EventEmitter;
var GridFSBucketReadStream = require('./download');
var GridFSBucketWriteStream = require('./upload');
var shallowClone = require('../utils').shallowClone;
var toError = require('../utils').toError;
var util = require('util');

var DEFAULT_GRIDFS_BUCKET_OPTIONS = {
  bucketName: 'fs',
  chunkSizeBytes: 255 * 1024
};

module.exports = GridFSBucket;

/**
 * Constructor for a streaming GridFS interface
 * @class
 * @param {Db} db A db handle
 * @param {object} [options=null] Optional settings.
 * @param {string} [options.bucketName="fs"] The 'files' and 'chunks' collections will be prefixed with the bucket name followed by a dot.
 * @param {number} [options.chunkSizeBytes=255 * 1024] Number of bytes stored in each chunk. Defaults to 255KB
 * @param {object} [options.writeConcern=null] Optional write concern to be passed to write operations, for instance `{ w: 1 }`
 * @param {object} [options.readPreference=null] Optional read preference to be passed to read operations
 * @fires GridFSBucketWriteStream#index
 * @return {GridFSBucket}
 */

function GridFSBucket(db, options) {
  Emitter.apply(this);
  this.setMaxListeners(0);

  if (options && typeof options === 'object') {
    options = shallowClone(options);
    var keys = Object.keys(DEFAULT_GRIDFS_BUCKET_OPTIONS);
    for (var i = 0; i < keys.length; ++i) {
      if (!options[keys[i]]) {
        options[keys[i]] = DEFAULT_GRIDFS_BUCKET_OPTIONS[keys[i]];
      }
    }
  } else {
    options = DEFAULT_GRIDFS_BUCKET_OPTIONS;
  }

  this.s = {
    db: db,
    options: options,
    _chunksCollection: db.collection(options.bucketName + '.chunks'),
    _filesCollection: db.collection(options.bucketName + '.files'),
    checkedIndexes: false,
    calledOpenUploadStream: false,
    promiseLibrary: db.s.promiseLibrary ||
      (typeof global.Promise == 'function' ? global.Promise : require('es6-promise').Promise)
  };
}

util.inherits(GridFSBucket, Emitter);

/**
 * When the first call to openUploadStream is made, the upload stream will
 * check to see if it needs to create the proper indexes on the chunks and
 * files collections. This event is fired either when 1) it determines that
 * no index creation is necessary, 2) when it successfully creates the
 * necessary indexes.
 *
 * @event GridFSBucket#index
 * @type {Error}
 */

/**
 * Returns a writable stream (GridFSBucketWriteStream) for writing
 * buffers to GridFS. The stream's 'id' property contains the resulting
 * file's id.
 * @method
 * @param {string} filename The value of the 'filename' key in the files doc
 * @param {object} [options=null] Optional settings.
 * @param {number} [options.chunkSizeBytes=null] Optional overwrite this bucket's chunkSizeBytes for this file
 * @param {object} [options.metadata=null] Optional object to store in the file document's `metadata` field
 * @param {string} [options.contentType=null] Optional string to store in the file document's `contentType` field
 * @param {array} [options.aliases=null] Optional array of strings to store in the file document's `aliases` field
 * @return {GridFSBucketWriteStream}
 */

GridFSBucket.prototype.openUploadStream = function(filename, options) {
  if (options) {
    options = shallowClone(options);
  } else {
    options = {};
  }
  if (!options.chunkSizeBytes) {
    options.chunkSizeBytes = this.s.options.chunkSizeBytes;
  }
  return new GridFSBucketWriteStream(this, filename, options);
};
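
// Usage sketch (hypothetical db handle and file path): pipe a local file into
// GridFS; the stream's `id` is the new file document's _id.
//
//   var bucket = new GridFSBucket(db, {bucketName: 'uploads'});
//   var upload = bucket.openUploadStream('report.pdf');
//   require('fs').createReadStream('./report.pdf')
//     .pipe(upload)
//     .on('finish', function() {
//       console.log('stored as', upload.id);
//     });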

/**
 * Returns a readable stream (GridFSBucketReadStream) for streaming file
 * data from GridFS.
 * @method
 * @param {ObjectId} id The id of the file doc
 * @param {Object} [options=null] Optional settings.
 * @param {Number} [options.start=null] Optional 0-based offset in bytes to start streaming from
 * @param {Number} [options.end=null] Optional 0-based offset in bytes to stop streaming before
 * @return {GridFSBucketReadStream}
 */

GridFSBucket.prototype.openDownloadStream = function(id, options) {
  var filter = { _id: id };
  // Pass through only the supported range options
  var streamOptions = {
    start: options && options.start,
    end: options && options.end
  };
  return new GridFSBucketReadStream(this.s._chunksCollection,
    this.s._filesCollection, this.s.options.readPreference, filter, streamOptions);
};
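
// Example usage (illustrative sketch, not part of the driver): stream a file
// back out of GridFS by its _id. Assumes `bucket` and `fileId` already exist.
//
//   bucket.openDownloadStream(fileId)
//     .pipe(require('fs').createWriteStream('./copy.bin'))
//     .on('error', function(error) { console.error(error); })
//     .on('finish', function() { console.log('download complete'); });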

/**
 * Deletes a file with the given id
 * @method
 * @param {ObjectId} id The id of the file doc
 * @param {GridFSBucket~errorCallback} [callback]
 */

GridFSBucket.prototype.delete = function(id, callback) {
  if (typeof callback === 'function') {
    return _delete(this, id, callback);
  }

  var _this = this;
  return new this.s.promiseLibrary(function(resolve, reject) {
    _delete(_this, id, function(error, res) {
      if (error) {
        reject(error);
      } else {
        resolve(res);
      }
    });
  });
};
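
// Example usage (illustrative sketch, not part of the driver): delete by id,
// using the promise form when no callback is passed.
//
//   bucket.delete(fileId).then(function() {
//     console.log('file and chunks removed');
//   }).catch(function(error) {
//     console.error(error);
//   });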

/**
 * @ignore
 */

function _delete(_this, id, callback) {
  _this.s._filesCollection.deleteOne({ _id: id }, function(error, res) {
    if (error) {
      return callback(error);
    }

    _this.s._chunksCollection.deleteMany({ files_id: id }, function(error) {
      if (error) {
        return callback(error);
      }

      // Delete orphaned chunks before returning FileNotFound
      if (!res.result.n) {
        var errmsg = 'FileNotFound: no file with id ' + id + ' found';
        return callback(new Error(errmsg));
      }

      callback();
    });
  });
}

/**
 * Convenience wrapper around find on the files collection
 * @method
 * @param {Object} filter
 * @param {Object} [options=null] Optional settings for cursor
 * @param {number} [options.batchSize=null] Optional batch size for cursor
 * @param {number} [options.limit=null] Optional limit for cursor
 * @param {number} [options.maxTimeMS=null] Optional maxTimeMS for cursor
 * @param {boolean} [options.noCursorTimeout=null] Optionally set cursor's `noCursorTimeout` flag
 * @param {number} [options.skip=null] Optional skip for cursor
 * @param {object} [options.sort=null] Optional sort for cursor
 * @return {Cursor}
 */

GridFSBucket.prototype.find = function(filter, options) {
  filter = filter || {};
  options = options || {};

  var cursor = this.s._filesCollection.find(filter);

  if (options.batchSize != null) {
    cursor.batchSize(options.batchSize);
  }
  if (options.limit != null) {
    cursor.limit(options.limit);
  }
  if (options.maxTimeMS != null) {
    cursor.maxTimeMS(options.maxTimeMS);
  }
  if (options.noCursorTimeout != null) {
    cursor.addCursorFlag('noCursorTimeout', options.noCursorTimeout);
  }
  if (options.skip != null) {
    cursor.skip(options.skip);
  }
  if (options.sort != null) {
    cursor.sort(options.sort);
  }

  return cursor;
};
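
// Example usage (illustrative sketch, not part of the driver): list the five
// most recently uploaded files whose names end in '.bin'.
//
//   bucket.find({ filename: /\.bin$/ }, { limit: 5, sort: { uploadDate: -1 } })
//     .toArray(function(error, files) {
//       if (error) return console.error(error);
//       console.log(files.map(function(f) { return f.filename; }));
//     });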

/**
 * Returns a readable stream (GridFSBucketReadStream) for streaming the
 * file with the given name from GridFS. If there are multiple files with
 * the same name, this will stream the most recent file with the given name
 * (as determined by the `uploadDate` field). You can set the `revision`
 * option to change this behavior.
 * @method
 * @param {String} filename The name of the file to stream
 * @param {Object} [options=null] Optional settings
 * @param {number} [options.revision=-1] The revision number relative to the oldest file with the given filename. 0 gets you the oldest file, 1 gets you the 2nd oldest, -1 gets you the newest.
 * @param {Number} [options.start=null] Optional 0-based offset in bytes to start streaming from
 * @param {Number} [options.end=null] Optional 0-based offset in bytes to stop streaming before
 * @return {GridFSBucketReadStream}
 */

GridFSBucket.prototype.openDownloadStreamByName = function(filename, options) {
  // Sort on `uploadDate`, the field name written into the files documents:
  // newest first by default, oldest first when a non-negative revision is given
  var sort = { uploadDate: -1 };
  var skip = null;
  if (options && options.revision != null) {
    if (options.revision >= 0) {
      sort = { uploadDate: 1 };
      skip = options.revision;
    } else {
      skip = -options.revision - 1;
    }
  }

  var filter = { filename: filename };
  var streamOptions = {
    sort: sort,
    skip: skip,
    start: options && options.start,
    end: options && options.end
  };
  return new GridFSBucketReadStream(this.s._chunksCollection,
    this.s._filesCollection, this.s.options.readPreference, filter, streamOptions);
};
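
// Example usage (illustrative sketch, not part of the driver): stream the
// oldest revision of 'data.bin' instead of the newest.
//
//   bucket.openDownloadStreamByName('data.bin', { revision: 0 })
//     .pipe(process.stdout);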

/**
 * Renames the file with the given _id to the given string
 * @method
 * @param {ObjectId} id the id of the file to rename
 * @param {String} filename new name for the file
 * @param {GridFSBucket~errorCallback} [callback]
 */

GridFSBucket.prototype.rename = function(id, filename, callback) {
  if (typeof callback === 'function') {
    return _rename(this, id, filename, callback);
  }

  var _this = this;
  return new this.s.promiseLibrary(function(resolve, reject) {
    _rename(_this, id, filename, function(error, res) {
      if (error) {
        reject(error);
      } else {
        resolve(res);
      }
    });
  });
};
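
// Example usage (illustrative sketch, not part of the driver):
//
//   bucket.rename(fileId, 'renamed.bin').then(function() {
//     console.log('rename complete');
//   });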

/**
 * @ignore
 */

function _rename(_this, id, filename, callback) {
  var filter = { _id: id };
  var update = { $set: { filename: filename } };
  _this.s._filesCollection.updateOne(filter, update, function(error, res) {
    if (error) {
      return callback(error);
    }
    if (!res.result.n) {
      return callback(toError('File with id ' + id + ' not found'));
    }
    callback();
  });
}

/**
 * Removes this bucket's files collection, followed by its chunks collection.
 * @method
 * @param {GridFSBucket~errorCallback} [callback]
 */

GridFSBucket.prototype.drop = function(callback) {
  if (typeof callback === 'function') {
    return _drop(this, callback);
  }

  var _this = this;
  return new this.s.promiseLibrary(function(resolve, reject) {
    _drop(_this, function(error, res) {
      if (error) {
        reject(error);
      } else {
        resolve(res);
      }
    });
  });
};

/**
 * @ignore
 */

function _drop(_this, callback) {
  _this.s._filesCollection.drop(function(error) {
    if (error) {
      return callback(error);
    }
    _this.s._chunksCollection.drop(function(error) {
      if (error) {
        return callback(error);
      }

      return callback();
    });
  });
}

/**
 * Callback format for all GridFSBucket methods that can accept a callback.
 * @callback GridFSBucket~errorCallback
 * @param {MongoError} error An error instance representing any errors that occurred
 */

450
node_modules/mongodb/lib/gridfs-stream/upload.js
generated
vendored
Normal file
@@ -0,0 +1,450 @@
var core = require('mongodb-core');
var crypto = require('crypto');
var shallowClone = require('../utils').shallowClone;
var stream = require('stream');
var util = require('util');

var ERROR_NAMESPACE_NOT_FOUND = 26;

module.exports = GridFSBucketWriteStream;

/**
 * A writable stream that enables you to write buffers to GridFS.
 *
 * Do not instantiate this class directly. Use `openUploadStream()` instead.
 *
 * @class
 * @param {GridFSBucket} bucket Handle for this stream's corresponding bucket
 * @param {string} filename The value of the 'filename' key in the files doc
 * @param {object} [options=null] Optional settings.
 * @param {number} [options.chunkSizeBytes=null] The chunk size to use, in bytes
 * @param {number} [options.w=null] The write concern
 * @param {number} [options.wtimeout=null] The write concern timeout
 * @param {number} [options.j=null] The journal write concern
 * @fires GridFSBucketWriteStream#error
 * @fires GridFSBucketWriteStream#finish
 * @return {GridFSBucketWriteStream} a GridFSBucketWriteStream instance.
 */

function GridFSBucketWriteStream(bucket, filename, options) {
  this.bucket = bucket;
  this.chunks = bucket.s._chunksCollection;
  this.filename = filename;
  this.files = bucket.s._filesCollection;
  this.options = options;

  this.id = core.BSON.ObjectId();
  this.chunkSizeBytes = this.options.chunkSizeBytes;
  this.bufToStore = new Buffer(this.chunkSizeBytes);
  this.length = 0;
  this.md5 = crypto.createHash('md5');
  this.n = 0;
  this.pos = 0;
  this.state = {
    streamEnd: false,
    outstandingRequests: 0,
    errored: false
  };

  if (!this.bucket.s.calledOpenUploadStream) {
    this.bucket.s.calledOpenUploadStream = true;

    var _this = this;
    checkIndexes(this, function() {
      _this.bucket.s.checkedIndexes = true;
      _this.bucket.emit('index');
    });
  }
}

util.inherits(GridFSBucketWriteStream, stream.Writable);

/**
 * An error occurred
 *
 * @event GridFSBucketWriteStream#error
 * @type {Error}
 */

/**
 * end() was called and the write stream successfully wrote all chunks to
 * MongoDB.
 *
 * @event GridFSBucketWriteStream#finish
 * @type {object}
 */

/**
 * Write a buffer to the stream.
 *
 * @method
 * @param {Buffer} chunk Buffer to write
 * @param {String} encoding Optional encoding for the buffer
 * @param {Function} callback Function to call when the chunk was added to the buffer, or, if this chunk caused a flush, when the entire chunk has been persisted to MongoDB.
 * @return {Boolean} False if this write required flushing a chunk to MongoDB. True otherwise.
 */

GridFSBucketWriteStream.prototype.write = function(chunk, encoding, callback) {
  var _this = this;
  return waitForIndexes(this, function() {
    return doWrite(_this, chunk, encoding, callback);
  });
};

/**
 * Tells the stream that no more data will be coming in. The stream will
 * persist the remaining data to MongoDB, write the files document, and
 * then emit a 'finish' event.
 *
 * @method
 * @param {Buffer} [chunk] Optional buffer to write before ending the stream
 * @param {String} [encoding] Optional encoding for the buffer
 * @param {Function} [callback] Function to call when all files and chunks have been persisted to MongoDB
 */

GridFSBucketWriteStream.prototype.end = function(chunk, encoding, callback) {
  var _this = this;
  this.state.streamEnd = true;

  if (callback) {
    this.once('finish', callback);
  }

  if (!chunk) {
    waitForIndexes(this, function() {
      writeRemnant(_this);
    });
    return;
  }

  this.write(chunk, encoding, function() {
    writeRemnant(_this);
  });
};

/**
 * @ignore
 */

function __handleError(_this, error, callback) {
  if (_this.state.errored) {
    return;
  }
  _this.state.errored = true;
  if (callback) {
    return callback(error);
  }
  _this.emit('error', error);
}

/**
 * @ignore
 */

function createChunkDoc(filesId, n, data) {
  return {
    _id: core.BSON.ObjectId(),
    files_id: filesId,
    n: n,
    data: data
  };
}

/**
 * @ignore
 */

function checkChunksIndex(_this, callback) {
  _this.chunks.listIndexes().toArray(function(error, indexes) {
    if (error) {
      // Collection doesn't exist so create index
      if (error.code === ERROR_NAMESPACE_NOT_FOUND) {
        var index = { files_id: 1, n: 1 };
        _this.chunks.createIndex(index, { background: false }, function(error) {
          if (error) {
            return callback(error);
          }

          callback();
        });
        return;
      }
      return callback(error);
    }

    var hasChunksIndex = false;
    indexes.forEach(function(index) {
      if (index.key) {
        var keys = Object.keys(index.key);
        if (keys.length === 2 && index.key.files_id === 1 &&
            index.key.n === 1) {
          hasChunksIndex = true;
        }
      }
    });

    if (hasChunksIndex) {
      callback();
    } else {
      var index = { files_id: 1, n: 1 };
      var indexOptions = getWriteOptions(_this);

      indexOptions.background = false;
      indexOptions.unique = true;

      _this.chunks.createIndex(index, indexOptions, function(error) {
        if (error) {
          return callback(error);
        }

        callback();
      });
    }
  });
}

/**
 * @ignore
 */

function checkDone(_this, callback) {
  if (_this.state.streamEnd &&
      _this.state.outstandingRequests === 0 &&
      !_this.state.errored) {
    var filesDoc = createFilesDoc(_this.id, _this.length, _this.chunkSizeBytes,
      _this.md5.digest('hex'), _this.filename, _this.options.contentType,
      _this.options.aliases, _this.options.metadata);

    _this.files.insert(filesDoc, getWriteOptions(_this), function(error) {
      if (error) {
        return __handleError(_this, error, callback);
      }
      _this.emit('finish', filesDoc);
    });

    return true;
  }

  return false;
}

/**
 * @ignore
 */

function checkIndexes(_this, callback) {
  _this.files.findOne({}, { _id: 1 }, function(error, doc) {
    if (error) {
      return callback(error);
    }
    if (doc) {
      return callback();
    }

    _this.files.listIndexes().toArray(function(error, indexes) {
      if (error) {
        // Collection doesn't exist so create index
        if (error.code === ERROR_NAMESPACE_NOT_FOUND) {
          var index = { filename: 1, uploadDate: 1 };
          _this.files.createIndex(index, { background: false }, function(error) {
            if (error) {
              return callback(error);
            }

            checkChunksIndex(_this, callback);
          });
          return;
        }
        return callback(error);
      }

      var hasFileIndex = false;
      indexes.forEach(function(index) {
        var keys = Object.keys(index.key);
        if (keys.length === 2 && index.key.filename === 1 &&
            index.key.uploadDate === 1) {
          hasFileIndex = true;
        }
      });

      if (hasFileIndex) {
        checkChunksIndex(_this, callback);
      } else {
        var index = { filename: 1, uploadDate: 1 };

        var indexOptions = getWriteOptions(_this);

        indexOptions.background = false;

        _this.files.createIndex(index, indexOptions, function(error) {
          if (error) {
            return callback(error);
          }

          checkChunksIndex(_this, callback);
        });
      }
    });
  });
}

/**
 * @ignore
 */

function createFilesDoc(_id, length, chunkSize, md5, filename, contentType,
                        aliases, metadata) {
  var ret = {
    _id: _id,
    length: length,
    chunkSize: chunkSize,
    uploadDate: new Date(),
    md5: md5,
    filename: filename
  };

  if (contentType) {
    ret.contentType = contentType;
  }

  if (aliases) {
    ret.aliases = aliases;
  }

  if (metadata) {
    ret.metadata = metadata;
  }

  return ret;
}

/**
 * @ignore
 */

function doWrite(_this, chunk, encoding, callback) {
  var inputBuf = (Buffer.isBuffer(chunk)) ?
    chunk : new Buffer(chunk, encoding);

  _this.length += inputBuf.length;

  // Input is small enough to fit in our buffer
  if (_this.pos + inputBuf.length < _this.chunkSizeBytes) {
    inputBuf.copy(_this.bufToStore, _this.pos);
    _this.pos += inputBuf.length;

    callback && callback();

    // Note that we reverse the typical semantics of write's return value
    // to be compatible with node's `.pipe()` function.
    // True means client can keep writing.
    return true;
  }

  // Otherwise, buffer is too big for current chunk, so we need to flush
  // to MongoDB.
  var inputBufRemaining = inputBuf.length;
  var spaceRemaining = _this.chunkSizeBytes - _this.pos;
  var numToCopy = Math.min(spaceRemaining, inputBuf.length);
  var outstandingRequests = 0;
  while (inputBufRemaining > 0) {
    var inputBufPos = inputBuf.length - inputBufRemaining;
    inputBuf.copy(_this.bufToStore, _this.pos,
      inputBufPos, inputBufPos + numToCopy);
    _this.pos += numToCopy;
    spaceRemaining -= numToCopy;
    if (spaceRemaining === 0) {
      _this.md5.update(_this.bufToStore);
      var doc = createChunkDoc(_this.id, _this.n, _this.bufToStore);
      ++_this.state.outstandingRequests;
      ++outstandingRequests;

      _this.chunks.insert(doc, getWriteOptions(_this), function(error) {
        if (error) {
          return __handleError(_this, error);
        }
        --_this.state.outstandingRequests;
        --outstandingRequests;
        if (!outstandingRequests) {
          _this.emit('drain', doc);
          callback && callback();
          checkDone(_this);
        }
      });

      spaceRemaining = _this.chunkSizeBytes;
      _this.pos = 0;
      ++_this.n;
    }
    inputBufRemaining -= numToCopy;
    numToCopy = Math.min(spaceRemaining, inputBufRemaining);
  }

  // Note that we reverse the typical semantics of write's return value
  // to be compatible with node's `.pipe()` function.
  // False means the client should wait for the 'drain' event.
  return false;
}
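
// Worked example of the chunking arithmetic above (illustrative, assuming the
// default 255KB chunk size of 261120 bytes): writing a single 600000-byte
// buffer into an empty stream flushes two full chunks (n=0 and n=1, 261120
// bytes each) and leaves 600000 - 2 * 261120 = 77760 bytes buffered in
// `bufToStore`, to be persisted by writeRemnant() when end() is called.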

/**
 * @ignore
 */

function getWriteOptions(_this) {
  var obj = {};
  if (_this.options.writeConcern) {
    // Copy the write concern settings from the stream's options
    obj.w = _this.options.writeConcern.w;
    obj.wtimeout = _this.options.writeConcern.wtimeout;
    obj.j = _this.options.writeConcern.j;
  }
  return obj;
}

/**
 * @ignore
 */

function waitForIndexes(_this, callback) {
  if (_this.bucket.s.checkedIndexes) {
    return callback(false);
  }

  _this.bucket.once('index', function() {
    callback(true);
  });

  return true;
}

/**
 * @ignore
 */

function writeRemnant(_this, callback) {
  // Buffer is empty, so don't bother to insert
  if (_this.pos === 0) {
    return checkDone(_this, callback);
  }

  ++_this.state.outstandingRequests;

  // Create a new buffer to make sure the buffer isn't bigger than it needs
  // to be.
  var remnant = new Buffer(_this.pos);
  _this.bufToStore.copy(remnant, 0, 0, _this.pos);
  _this.md5.update(remnant);
  var doc = createChunkDoc(_this.id, _this.n, remnant);

  _this.chunks.insert(doc, getWriteOptions(_this), function(error) {
    if (error) {
      return __handleError(_this, error);
    }
    --_this.state.outstandingRequests;
    checkDone(_this);
  });
}

233
node_modules/mongodb/lib/gridfs/chunk.js
generated
vendored
Normal file
@@ -0,0 +1,233 @@
"use strict";

var Binary = require('mongodb-core').BSON.Binary,
  ObjectID = require('mongodb-core').BSON.ObjectID;

/**
 * Class for representing a single chunk in GridFS.
 *
 * @class
 *
 * @param file {GridStore} The {@link GridStore} object holding this chunk.
 * @param mongoObject {object} The mongo object representation of this chunk.
 *
 * @throws Error when the type of the data field for {@link mongoObject} is not
 *     supported. Currently supported types for the data field are instances of
 *     {@link String}, {@link Array} and {@link Binary} from the bson module.
 *
 * @see Chunk#buildMongoObject
 */
var Chunk = function(file, mongoObject, writeConcern) {
  if(!(this instanceof Chunk)) return new Chunk(file, mongoObject, writeConcern);

  this.file = file;
  var self = this;
  var mongoObjectFinal = mongoObject == null ? {} : mongoObject;
  this.writeConcern = writeConcern || {w:1};
  this.objectId = mongoObjectFinal._id == null ? new ObjectID() : mongoObjectFinal._id;
  this.chunkNumber = mongoObjectFinal.n == null ? 0 : mongoObjectFinal.n;
  this.data = new Binary();

  if(mongoObjectFinal.data == null) {
  } else if(typeof mongoObjectFinal.data == "string") {
    var buffer = new Buffer(mongoObjectFinal.data.length);
    buffer.write(mongoObjectFinal.data, 0, mongoObjectFinal.data.length, 'binary');
    this.data = new Binary(buffer);
  } else if(Array.isArray(mongoObjectFinal.data)) {
    var buffer = new Buffer(mongoObjectFinal.data.length);
    var data = mongoObjectFinal.data.join('');
    buffer.write(data, 0, data.length, 'binary');
    this.data = new Binary(buffer);
  } else if(mongoObjectFinal.data._bsontype === 'Binary') {
    this.data = mongoObjectFinal.data;
  } else if(Buffer.isBuffer(mongoObjectFinal.data)) {
    // Raw Buffers are accepted but left as-is; this.data keeps the empty
    // Binary initialized above
  } else {
    throw Error("Illegal chunk format");
  }

  // Update position
  this.internalPosition = 0;
};

/**
 * Writes data to this object and advances the read/write head.
 *
 * @param data {string} the data to write
 * @param callback {function(*, GridStore)} This will be called after executing
 *     this method. The first parameter will contain null and the second one
 *     will contain a reference to this object.
 */
Chunk.prototype.write = function(data, callback) {
  this.data.write(data, this.internalPosition, data.length, 'binary');
  this.internalPosition = this.data.length();
  if(callback != null) return callback(null, this);
  return this;
};

/**
 * Reads data and advances the read/write head.
 *
 * @param length {number} The length of data to read.
 *
 * @return {string} The data read if the given length will not exceed the end of
 *     the chunk. Returns an empty String otherwise.
 */
Chunk.prototype.read = function(length) {
  // Default to full read if no index defined
  length = length == null || length == 0 ? this.length() : length;

  if(this.length() - this.internalPosition + 1 >= length) {
    var data = this.data.read(this.internalPosition, length);
    this.internalPosition = this.internalPosition + length;
    return data;
  } else {
    return '';
  }
};

/**
 * Reads a slice of data as a Buffer and advances the read/write head. Returns
 * null if the requested length runs past the end of the chunk.
 */
Chunk.prototype.readSlice = function(length) {
  if ((this.length() - this.internalPosition) >= length) {
    var data = null;
    if (this.data.buffer != null) { //Pure BSON
      data = this.data.buffer.slice(this.internalPosition, this.internalPosition + length);
    } else { //Native BSON
      data = new Buffer(length);
      length = this.data.readInto(data, this.internalPosition);
    }
    this.internalPosition = this.internalPosition + length;
    return data;
  } else {
    return null;
  }
};

/**
 * Checks if the read/write head is at the end.
 *
 * @return {boolean} Whether the read/write head has reached the end of this
 *     chunk.
 */
Chunk.prototype.eof = function() {
  return this.internalPosition == this.length();
};

/**
 * Reads one character from the data of this chunk and advances the read/write
 * head.
 *
 * @return {string} a single character of data read if the read/write head is
 *     not at the end of the chunk. Returns an empty String otherwise.
 */
Chunk.prototype.getc = function() {
  return this.read(1);
};

/**
 * Clears the contents of the data in this chunk and resets the read/write head
 * to the initial position.
 */
Chunk.prototype.rewind = function() {
  this.internalPosition = 0;
  this.data = new Binary();
};

/**
 * Saves this chunk to the database. Also overwrites existing entries having the
 * same id as this chunk.
 *
 * @param callback {function(*, GridStore)} This will be called after executing
 *     this method. The first parameter will contain null and the second one
 *     will contain a reference to this object.
 */
Chunk.prototype.save = function(options, callback) {
  var self = this;
  if(typeof options == 'function') {
    callback = options;
    options = {};
  }

  self.file.chunkCollection(function(err, collection) {
    if(err) return callback(err);

    // Merge the options with the write concern
    var writeOptions = { upsert: true };
    for(var name in options) writeOptions[name] = options[name];
    for(var name in self.writeConcern) writeOptions[name] = self.writeConcern[name];

    if(self.data.length() > 0) {
      self.buildMongoObject(function(mongoObject) {
        collection.replaceOne({'_id':self.objectId}, mongoObject, writeOptions, function(err, collection) {
          callback(err, self);
        });
      });
    } else {
      callback(null, self);
    }
  });
};

/**
 * Creates a mongoDB object representation of this chunk.
 *
 * @param callback {function(Object)} This will be called after executing this
 *     method. The object will be passed to the first parameter and will have
 *     the structure:
 *
 * <pre><code>
 * {
 *   '_id' : , // {number} id for this chunk
 *   'files_id' : , // {number} foreign key to the file collection
 *   'n' : , // {number} chunk number
 *   'data' : , // {bson#Binary} the chunk data itself
 * }
 * </code></pre>
 *
 * @see <a href="http://www.mongodb.org/display/DOCS/GridFS+Specification#GridFSSpecification-{{chunks}}">MongoDB GridFS Chunk Object Structure</a>
 */
Chunk.prototype.buildMongoObject = function(callback) {
  var mongoObject = {
    'files_id': this.file.fileId,
    'n': this.chunkNumber,
    'data': this.data};
  // If we are saving using a specific ObjectId
  if(this.objectId != null) mongoObject._id = this.objectId;

  callback(mongoObject);
};

/**
 * @return {number} the length of the data
 */
Chunk.prototype.length = function() {
  return this.data.length();
};

/**
 * The position of the read/write head
 * @name position
 * @lends Chunk#
 * @field
 */
Object.defineProperty(Chunk.prototype, "position", { enumerable: true
  , get: function () {
      return this.internalPosition;
    }
  , set: function(value) {
      this.internalPosition = value;
    }
});

/**
 * The default chunk size
 * @constant
 */
Chunk.DEFAULT_CHUNK_SIZE = 1024 * 255;

module.exports = Chunk;

1956
node_modules/mongodb/lib/gridfs/grid_store.js
generated
vendored
Normal file
File diff suppressed because it is too large
Load Diff
64
node_modules/mongodb/lib/metadata.js
generated
vendored
Normal file
@@ -0,0 +1,64 @@
var f = require('util').format;

var Define = function(name, object, stream) {
  this.name = name;
  this.object = object;
  this.stream = typeof stream == 'boolean' ? stream : false;
  this.instrumentations = {};
};

Define.prototype.classMethod = function(name, options) {
  var keys = Object.keys(options).sort();
  var key = generateKey(keys, options);

  // Add a list of instrumentations
  if(this.instrumentations[key] == null) {
    this.instrumentations[key] = {
      methods: [], options: options
    };
  }

  // Push to the list of methods for this instrumentation
  this.instrumentations[key].methods.push(name);
};

var generateKey = function(keys, options) {
  var parts = [];
  for(var i = 0; i < keys.length; i++) {
    parts.push(f('%s=%s', keys[i], options[keys[i]]));
  }

  return parts.join(',');
};

Define.prototype.staticMethod = function(name, options) {
  options.static = true;
  var keys = Object.keys(options).sort();
  var key = generateKey(keys, options);

  // Add a list of instrumentations
  if(this.instrumentations[key] == null) {
    this.instrumentations[key] = {
      methods: [], options: options
    };
  }

  // Push to the list of methods for this instrumentation
  this.instrumentations[key].methods.push(name);
};

Define.prototype.generate = function(keys, options) {
  // Generate the return object
  var object = {
    name: this.name, obj: this.object, stream: this.stream,
    instrumentations: []
  };

  for(var name in this.instrumentations) {
    object.instrumentations.push(this.instrumentations[name]);
  }

  return object;
};

module.exports = Define;
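
// Example usage (illustrative sketch; 'close' is a hypothetical method name):
// this is how the driver collects instrumentation metadata for APM tooling,
// e.g. mongo_client.js below does exactly the staticMethod('connect') call.
//
//   var define = new Define('MongoClient', MongoClient, false);
//   define.classMethod('close', {callback: true, promise: true});
//   define.staticMethod('connect', {callback: true, promise: true});
//   // define.generate() then returns { name, obj, stream, instrumentations }
//   // with methods grouped by their option signature.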

472
node_modules/mongodb/lib/mongo_client.js
generated
vendored
Normal file
@@ -0,0 +1,472 @@
"use strict";

var parse = require('./url_parser')
  , Server = require('./server')
  , Mongos = require('./mongos')
  , ReplSet = require('./replset')
  , Define = require('./metadata')
  , ReadPreference = require('./read_preference')
  , Db = require('./db');

/**
 * @fileOverview The **MongoClient** class allows for making connections to MongoDB.
 *
 * @example
 * var MongoClient = require('mongodb').MongoClient,
 *   test = require('assert');
 * // Connection url
 * var url = 'mongodb://localhost:27017/test';
 * // Connect using MongoClient
 * MongoClient.connect(url, function(err, db) {
 *   // Get an additional db
 *   db.close();
 * });
 */

/**
 * Creates a new MongoClient instance
 * @class
 * @return {MongoClient} a MongoClient instance.
 */
function MongoClient() {
  /**
   * The callback format for results
   * @callback MongoClient~connectCallback
   * @param {MongoError} error An error instance representing the error during the execution.
   * @param {Db} db The connected database.
   */

  /**
   * Connect to MongoDB using a url as documented at
   *
   *  docs.mongodb.org/manual/reference/connection-string/
   *
   * Note that for replicasets the replicaSet query parameter is required in the 2.0 driver
   *
   * @method
   * @param {string} url The connection URI string
   * @param {object} [options=null] Optional settings.
   * @param {boolean} [options.uri_decode_auth=false] Uri decode the user name and password for authentication
   * @param {object} [options.db=null] A hash of options to set on the db object, see **Db constructor**
   * @param {object} [options.server=null] A hash of options to set on the server objects, see **Server constructor**
   * @param {object} [options.replSet=null] A hash of options to set on the replSet object, see **ReplSet constructor**
   * @param {object} [options.mongos=null] A hash of options to set on the mongos object, see **Mongos constructor**
   * @param {object} [options.promiseLibrary=null] A Promise library class the application wishes to use such as Bluebird, must be ES6 compatible
   * @param {MongoClient~connectCallback} [callback] The command result callback
   * @return {Promise} returns Promise if no callback passed
   */
  this.connect = MongoClient.connect;
}

var define = MongoClient.define = new Define('MongoClient', MongoClient, false);

/**
 * Connect to MongoDB using a url as documented at
 *
 *  docs.mongodb.org/manual/reference/connection-string/
 *
 * Note that for replicasets the replicaSet query parameter is required in the 2.0 driver
 *
 * @method
 * @static
 * @param {string} url The connection URI string
 * @param {object} [options=null] Optional settings.
 * @param {boolean} [options.uri_decode_auth=false] Uri decode the user name and password for authentication
 * @param {object} [options.db=null] A hash of options to set on the db object, see **Db constructor**
 * @param {object} [options.server=null] A hash of options to set on the server objects, see **Server constructor**
 * @param {object} [options.replSet=null] A hash of options to set on the replSet object, see **ReplSet constructor**
 * @param {object} [options.mongos=null] A hash of options to set on the mongos object, see **Mongos constructor**
 * @param {object} [options.promiseLibrary=null] A Promise library class the application wishes to use such as Bluebird, must be ES6 compatible
 * @param {MongoClient~connectCallback} [callback] The command result callback
 * @return {Promise} returns Promise if no callback passed
 */
MongoClient.connect = function(url, options, callback) {
  var args = Array.prototype.slice.call(arguments, 1);
  callback = typeof args[args.length - 1] == 'function' ? args.pop() : null;
  options = args.length ? args.shift() : null;
  options = options || {};

  // Get the promiseLibrary
  var promiseLibrary = options.promiseLibrary;

  // No promise library selected, fall back to the global or polyfilled Promise
  if(!promiseLibrary) {
    promiseLibrary = typeof global.Promise == 'function' ?
      global.Promise : require('es6-promise').Promise;
  }

  // Return a promise
  if(typeof callback != 'function') {
    return new promiseLibrary(function(resolve, reject) {
      connect(url, options, function(err, db) {
        if(err) return reject(err);
        resolve(db);
      });
    });
  }

  // Fallback to callback based connect
  connect(url, options, callback);
}
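
// Example usage (illustrative sketch, not part of the driver): the promise
// form, used when no callback is supplied.
//
//   MongoClient.connect('mongodb://localhost:27017/test').then(function(db) {
//     // ... use db ...
//     return db.close();
//   }).catch(function(err) {
//     console.error(err);
//   });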

define.staticMethod('connect', {callback: true, promise:true});
var connect = function(url, options, callback) {
  var serverOptions = options.server || {};
  var mongosOptions = options.mongos || {};
  var replSetServersOptions = options.replSet || options.replSetServers || {};
  var dbOptions = options.db || {};

  // If callback is null throw an exception
  if(callback == null)
    throw new Error("no callback function provided");

  // Parse the connection string
  var object = parse(url, options);

  // Merge in any options for db in options object
  if(dbOptions) {
    for(var name in dbOptions) object.db_options[name] = dbOptions[name];
  }

  // Add the url to the options
  object.db_options.url = url;

  // Merge in any options for server in options object
  if(serverOptions) {
    for(var name in serverOptions) object.server_options[name] = serverOptions[name];
  }

  // Merge in any replicaset server options
  if(replSetServersOptions) {
    for(var name in replSetServersOptions) object.rs_options[name] = replSetServersOptions[name];
  }

  if(replSetServersOptions.ssl
    || replSetServersOptions.sslValidate
    || replSetServersOptions.checkServerIdentity
    || replSetServersOptions.sslCA
    || replSetServersOptions.sslCert
    || replSetServersOptions.sslKey
    || replSetServersOptions.sslPass) {
    object.server_options.ssl = replSetServersOptions.ssl;
    object.server_options.sslValidate = replSetServersOptions.sslValidate;
    object.server_options.checkServerIdentity = replSetServersOptions.checkServerIdentity;
    object.server_options.sslCA = replSetServersOptions.sslCA;
    object.server_options.sslCert = replSetServersOptions.sslCert;
    object.server_options.sslKey = replSetServersOptions.sslKey;
    object.server_options.sslPass = replSetServersOptions.sslPass;
  }

  // Merge in any mongos options
  if(mongosOptions) {
    for(var name in mongosOptions) object.mongos_options[name] = mongosOptions[name];
  }

  if(typeof object.server_options.poolSize == 'number') {
    if(!object.mongos_options.poolSize) object.mongos_options.poolSize = object.server_options.poolSize;
    if(!object.rs_options.poolSize) object.rs_options.poolSize = object.server_options.poolSize;
  }

  if(mongosOptions.ssl
    || mongosOptions.sslValidate
    || mongosOptions.checkServerIdentity
    || mongosOptions.sslCA
    || mongosOptions.sslCert
    || mongosOptions.sslKey
    || mongosOptions.sslPass) {
    object.server_options.ssl = mongosOptions.ssl;
    object.server_options.sslValidate = mongosOptions.sslValidate;
    object.server_options.checkServerIdentity = mongosOptions.checkServerIdentity;
    object.server_options.sslCA = mongosOptions.sslCA;
    object.server_options.sslCert = mongosOptions.sslCert;
    object.server_options.sslKey = mongosOptions.sslKey;
    object.server_options.sslPass = mongosOptions.sslPass;
  }

  // Set the promise library
  object.db_options.promiseLibrary = options.promiseLibrary;

  // We need to ensure that the list of servers is only either direct members or mongos
  // instances; they cannot be a mix of mongos and mongod servers
  var totalNumberOfServers = object.servers.length;
  var totalNumberOfMongosServers = 0;
  var totalNumberOfMongodServers = 0;
  var serverConfig = null;
  var errorServers = {};

  // Failure modes
  if(object.servers.length == 0) throw new Error("connection string must contain at least one seed host");

  // If we have no db setting for the native parser try to set the c++ one first
  object.db_options.native_parser = _setNativeParser(object.db_options);
  // If no auto_reconnect is set, set it to true as default for single servers
  if(typeof object.server_options.auto_reconnect != 'boolean') {
    object.server_options.auto_reconnect = true;
  }

  // If we have more than one server, it could be a replicaset or a mongos list;
  // we need to verify that it's one or the other and fail if it's a mix.
  // Connect to all servers and run ismaster
  for(var i = 0; i < object.servers.length; i++) {
    // Set up socket options
    var providedSocketOptions = object.server_options.socketOptions || {};

    var _server_options = {
      poolSize:1
      , socketOptions: {
        connectTimeoutMS: providedSocketOptions.connectTimeoutMS || (1000 * 120)
        , socketTimeoutMS: providedSocketOptions.socketTimeoutMS || (1000 * 120)
      }
      , auto_reconnect:false};

    // Ensure we have ssl setup for the servers
    if(object.server_options.ssl) {
      _server_options.ssl = object.server_options.ssl;
      _server_options.sslValidate = object.server_options.sslValidate;
      _server_options.checkServerIdentity = object.server_options.checkServerIdentity;
      _server_options.sslCA = object.server_options.sslCA;
      _server_options.sslCert = object.server_options.sslCert;
      _server_options.sslKey = object.server_options.sslKey;
      _server_options.sslPass = object.server_options.sslPass;
    } else if(object.rs_options.ssl) {
      _server_options.ssl = object.rs_options.ssl;
      _server_options.sslValidate = object.rs_options.sslValidate;
      _server_options.checkServerIdentity = object.rs_options.checkServerIdentity;
      _server_options.sslCA = object.rs_options.sslCA;
      _server_options.sslCert = object.rs_options.sslCert;
      _server_options.sslKey = object.rs_options.sslKey;
      _server_options.sslPass = object.rs_options.sslPass;
    }

    // Error
    var error = null;
    // Set up the Server object
    var _server = object.servers[i].domain_socket
      ? new Server(object.servers[i].domain_socket, _server_options)
      : new Server(object.servers[i].host, object.servers[i].port, _server_options);

    var connectFunction = function(__server) {
      // Attempt connect
      new Db(object.dbName, __server, {w:1, native_parser:false, promiseLibrary:options.promiseLibrary}).open(function(err, db) {
        // Update number of servers
        totalNumberOfServers = totalNumberOfServers - 1;

        // If no error do the correct checks
        if(!err) {
          // Close the connection
          db.close();
          // Get the last ismaster document
          var isMasterDoc = db.serverConfig.isMasterDoc;

          // Check what type of server we have
          if(isMasterDoc.setName) {
            totalNumberOfMongodServers++;
          }

          if(isMasterDoc.msg && isMasterDoc.msg == "isdbgrid") totalNumberOfMongosServers++;
        } else {
          error = err;
          errorServers[__server.host + ":" + __server.port] = __server;
        }

        if(totalNumberOfServers == 0) {
          // Error out
          if(totalNumberOfMongodServers == 0 && totalNumberOfMongosServers == 0 && error) {
            return callback(error, null);
          }

          // If we have a mix of mongod and mongos, throw an error
          if(totalNumberOfMongosServers > 0 && totalNumberOfMongodServers > 0) {
            if(db) db.close();
            return process.nextTick(function() {
              try {
                callback(new Error("cannot combine a list of replicaset seeds and mongos seeds"));
              } catch (err) {
                throw err;
              }
            })
          }

          if(totalNumberOfMongodServers == 0
            && totalNumberOfMongosServers == 0
            && object.servers.length == 1
            && (!object.rs_options.replicaSet || !object.rs_options.rs_name)) {

            var obj = object.servers[0];
            serverConfig = obj.domain_socket ?
              new Server(obj.domain_socket, object.server_options)
              : new Server(obj.host, obj.port, object.server_options);

          } else if(totalNumberOfMongodServers > 0
            || totalNumberOfMongosServers > 0
            || object.rs_options.replicaSet || object.rs_options.rs_name) {

            var finalServers = object.servers
              .filter(function(serverObj) {
                return errorServers[serverObj.host + ":" + serverObj.port] == null;
              })
              .map(function(serverObj) {
                return serverObj.domain_socket ?
                  new Server(serverObj.domain_socket, 27017, object.server_options)
                  : new Server(serverObj.host, serverObj.port, object.server_options);
              });

            // Clean out any error servers
            errorServers = {};

            // Set up the final configuration
            if(totalNumberOfMongodServers > 0) {
              try {

                // If no replicaset name was provided, we wish to perform a
                // direct connection
                if(totalNumberOfMongodServers == 1
                  && (!object.rs_options.replicaSet && !object.rs_options.rs_name)) {
                  serverConfig = finalServers[0];
                } else if(totalNumberOfMongodServers == 1) {
                  object.rs_options.replicaSet = object.rs_options.replicaSet || object.rs_options.rs_name;
                  serverConfig = new ReplSet(finalServers, object.rs_options);
                } else {
                  serverConfig = new ReplSet(finalServers, object.rs_options);
                }

              } catch(err) {
                return callback(err, null);
              }
            } else {
              serverConfig = new Mongos(finalServers, object.mongos_options);
            }
          }

          if(serverConfig == null) {
            return process.nextTick(function() {
              try {
                callback(new Error("Could not locate any valid servers in initial seed list"));
              } catch (err) {
                if(db) db.close();
                throw err;
              }
            });
          }

          // Ensure no firing of open event before we are ready
          serverConfig.emitOpen = false;
          // Set up all options etc and connect to the database
          _finishConnecting(serverConfig, object, options, callback)
        }
      });
    }

    // Wrap the context of the call
    connectFunction(_server);
  }
}

var _setNativeParser = function(db_options) {
  if(typeof db_options.native_parser == 'boolean') return db_options.native_parser;

  try {
    require('mongodb-core').BSON.BSONNative.BSON;
    return true;
  } catch(err) {
    return false;
  }
}

var _finishConnecting = function(serverConfig, object, options, callback) {
  // If we have a readPreference passed in by the db options
  if(typeof object.db_options.readPreference == 'string') {
    object.db_options.readPreference = new ReadPreference(object.db_options.readPreference);
  } else if(typeof object.db_options.read_preference == 'string') {
    object.db_options.readPreference = new ReadPreference(object.db_options.read_preference);
  }

  // Do we have readPreference tags
  if(object.db_options.readPreference && object.db_options.readPreferenceTags) {
    object.db_options.readPreference.tags = object.db_options.readPreferenceTags;
  } else if(object.db_options.readPreference && object.db_options.read_preference_tags) {
    object.db_options.readPreference.tags = object.db_options.read_preference_tags;
  }

  // Get the socketTimeoutMS
  var socketTimeoutMS = object.server_options.socketOptions.socketTimeoutMS || 0;

  // If we have a replset, override with replicaset socket timeout option if available
  if(serverConfig instanceof ReplSet) {
    socketTimeoutMS = object.rs_options.socketOptions.socketTimeoutMS || socketTimeoutMS;
  }

  // Set socketTimeout to the same as the connectTimeoutMS or 30 sec
  serverConfig.connectTimeoutMS = serverConfig.connectTimeoutMS || 30000;
  serverConfig.socketTimeoutMS = serverConfig.connectTimeoutMS;

  // Set up the db options
  var db = new Db(object.dbName, serverConfig, object.db_options);
  // Open the db
  db.open(function(err, db){

    if(err) {
      return process.nextTick(function() {
        try {
          callback(err, null);
        } catch (err) {
          if(db) db.close();
          throw err;
        }
      });
    }

    // Reset the socket timeout
    serverConfig.socketTimeoutMS = socketTimeoutMS || 0;

    // Return object
    if(err == null && object.auth){
      // What db to authenticate against
      var authentication_db = db;
      if(object.db_options && object.db_options.authSource) {
        authentication_db = db.db(object.db_options.authSource);
      }

      // Build options object
      var options = {};
      if(object.db_options.authMechanism) options.authMechanism = object.db_options.authMechanism;
      if(object.db_options.gssapiServiceName) options.gssapiServiceName = object.db_options.gssapiServiceName;

      // Authenticate
      authentication_db.authenticate(object.auth.user, object.auth.password, options, function(err, success){
        if(success){
          process.nextTick(function() {
            try {
              callback(null, db);
            } catch (err) {
              if(db) db.close();
              throw err;
            }
          });
        } else {
          if(db) db.close();
          process.nextTick(function() {
            try {
              callback(err ? err : new Error('Could not authenticate user ' + object.auth.user), null);
            } catch (err) {
              if(db) db.close();
              throw err;
            }
          });
        }
      });
    } else {
      process.nextTick(function() {
        try {
          callback(err, db);
        } catch (err) {
          if(db) db.close();
          throw err;
        }
      })
    }
  });
}

module.exports = MongoClient;

499
node_modules/mongodb/lib/mongos.js
generated
vendored
Normal file
@@ -0,0 +1,499 @@
"use strict";

var EventEmitter = require('events').EventEmitter
  , inherits = require('util').inherits
  , f = require('util').format
  , ServerCapabilities = require('./topology_base').ServerCapabilities
  , MongoCR = require('mongodb-core').MongoCR
  , MongoError = require('mongodb-core').MongoError
  , CMongos = require('mongodb-core').Mongos
  , Cursor = require('./cursor')
  , AggregationCursor = require('./aggregation_cursor')
  , CommandCursor = require('./command_cursor')
  , Define = require('./metadata')
  , Server = require('./server')
  , Store = require('./topology_base').Store
  , shallowClone = require('./utils').shallowClone;

/**
 * @fileOverview The **Mongos** class is a class that represents a Mongos Proxy topology and is
 * used to construct connections.
 *
 * **Mongos Should not be used, use MongoClient.connect**
 * @example
 * var Db = require('mongodb').Db,
 *   Mongos = require('mongodb').Mongos,
 *   Server = require('mongodb').Server,
 *   test = require('assert');
 * // Connect using Mongos
 * var server = new Server('localhost', 27017);
 * var db = new Db('test', new Mongos([server]));
 * db.open(function(err, db) {
 *   // Get an additional db
 *   db.close();
 * });
 */

/**
 * Creates a new Mongos instance
 * @class
 * @deprecated
 * @param {Server[]} servers A seedlist of servers participating in the replicaset.
 * @param {object} [options=null] Optional settings.
 * @param {boolean} [options.ha=true] Turn on high availability monitoring.
 * @param {number} [options.haInterval=5000] Time between each replicaset status check.
 * @param {number} [options.poolSize=5] Number of connections in the connection pool for each server instance, set to 5 as default for legacy reasons.
 * @param {boolean} [options.ssl=false] Use ssl connection (needs to have a mongod server with ssl support)
 * @param {boolean|function} [options.checkServerIdentity=true] Ensure we check server identity during SSL, set to false to disable checking. Only works for Node 0.12.x or higher. You can pass in a boolean or your own checkServerIdentity override function.
 * @param {boolean} [options.sslValidate=true] Validate mongod server certificate against ca (needs to have a mongod server with ssl support, 2.4 or higher)
 * @param {array} [options.sslCA=null] Array of valid certificates either as Buffers or Strings (needs to have a mongod server with ssl support, 2.4 or higher)
 * @param {(Buffer|string)} [options.sslCert=null] String or buffer containing the certificate we wish to present (needs to have a mongod server with ssl support, 2.4 or higher)
 * @param {(Buffer|string)} [options.sslKey=null] String or buffer containing the certificate private key we wish to present (needs to have a mongod server with ssl support, 2.4 or higher)
 * @param {(Buffer|string)} [options.sslPass=null] String or buffer containing the certificate password (needs to have a mongod server with ssl support, 2.4 or higher)
 * @param {object} [options.socketOptions=null] Socket options
 * @param {boolean} [options.socketOptions.noDelay=true] TCP Socket NoDelay option.
 * @param {number} [options.socketOptions.keepAlive=0] TCP KeepAlive on the socket with an X ms delay before start.
 * @param {number} [options.socketOptions.connectTimeoutMS=0] TCP Connection timeout setting
 * @param {number} [options.socketOptions.socketTimeoutMS=0] TCP Socket timeout setting
 * @fires Mongos#connect
 * @fires Mongos#ha
 * @fires Mongos#joined
 * @fires Mongos#left
 * @fires Mongos#fullsetup
 * @fires Mongos#open
 * @fires Mongos#close
 * @fires Mongos#error
 * @fires Mongos#timeout
 * @fires Mongos#parseError
 * @return {Mongos} a Mongos instance.
 */
var Mongos = function(servers, options) {
  if(!(this instanceof Mongos)) return new Mongos(servers, options);
  options = options || {};
  var self = this;

  // Ensure all the instances are Server
  for(var i = 0; i < servers.length; i++) {
    if(!(servers[i] instanceof Server)) {
      throw MongoError.create({message: "all seed list instances must be of the Server type", driver:true});
    }
  }

  // Store option defaults
  var storeOptions = {
      force: false
    , bufferMaxEntries: -1
  }

  // Shared global store
  var store = options.store || new Store(self, storeOptions);

  // Set up event emitter
  EventEmitter.call(this);

  // Debug tag
  var tag = options.tag;

  // Build seed list
  var seedlist = servers.map(function(x) {
    return {host: x.host, port: x.port}
  });

  // Final options
  var finalOptions = shallowClone(options);

  // Default values
  finalOptions.size = typeof options.poolSize == 'number' ? options.poolSize : 5;
  finalOptions.reconnect = typeof options.auto_reconnect == 'boolean' ? options.auto_reconnect : true;
  finalOptions.emitError = typeof options.emitError == 'boolean' ? options.emitError : true;
  finalOptions.cursorFactory = Cursor;

  // Add the store
  finalOptions.disconnectHandler = store;

  // Ensure we change the sslCA option to ca if available
  if(options.sslCA) finalOptions.ca = options.sslCA;
  if(typeof options.sslValidate == 'boolean') finalOptions.rejectUnauthorized = options.sslValidate;
  if(options.sslKey) finalOptions.key = options.sslKey;
  if(options.sslCert) finalOptions.cert = options.sslCert;
  if(options.sslPass) finalOptions.passphrase = options.sslPass;
  if(options.checkServerIdentity) finalOptions.checkServerIdentity = options.checkServerIdentity;

  // Socket options passed down
  if(options.socketOptions) {
    if(options.socketOptions.connectTimeoutMS) {
      this.connectTimeoutMS = options.socketOptions.connectTimeoutMS;
      finalOptions.connectionTimeout = options.socketOptions.connectTimeoutMS;
    }
    if(options.socketOptions.socketTimeoutMS)
      finalOptions.socketTimeout = options.socketOptions.socketTimeoutMS;
  }

  // Are we running in debug mode
  var debug = typeof options.debug == 'boolean' ? options.debug : false;
  if(debug) {
    finalOptions.debug = debug;
  }

  // Map keep alive setting
  if(options.socketOptions && typeof options.socketOptions.keepAlive == 'number') {
    finalOptions.keepAlive = true;
    if(typeof options.socketOptions.keepAlive == 'number') {
      finalOptions.keepAliveInitialDelay = options.socketOptions.keepAlive;
    }
  }

  // Connection timeout
  if(options.socketOptions && typeof options.socketOptions.connectionTimeout == 'number') {
    finalOptions.connectionTimeout = options.socketOptions.connectionTimeout;
  }

  // Socket timeout
  if(options.socketOptions && typeof options.socketOptions.socketTimeout == 'number') {
    finalOptions.socketTimeout = options.socketOptions.socketTimeout;
  }

  // noDelay
  if(options.socketOptions && typeof options.socketOptions.noDelay == 'boolean') {
    finalOptions.noDelay = options.socketOptions.noDelay;
  }

  if(typeof options.secondaryAcceptableLatencyMS == 'number') {
    finalOptions.acceptableLatency = options.secondaryAcceptableLatencyMS;
  }

  // Add the non connection store
  finalOptions.disconnectHandler = store;

  // Create the Mongos
  var mongos = new CMongos(seedlist, finalOptions)
  // Server capabilities
  var sCapabilities = null;
  // Add auth providers
  mongos.addAuthProvider('mongocr', new MongoCR());

  // Internal state
  this.s = {
    // Create the Mongos
    mongos: mongos
    // Server capabilities
    , sCapabilities: sCapabilities
    // Debug turned on
    , debug: debug
    // Store option defaults
    , storeOptions: storeOptions
    // Cloned options
    , clonedOptions: finalOptions
    // Actual store of callbacks
    , store: store
    // Options
    , options: options
  }

  // Last ismaster
  Object.defineProperty(this, 'isMasterDoc', {
    enumerable:true, get: function() { return self.s.mongos.lastIsMaster(); }
  });

  // Number of currently connected mongos proxies
  Object.defineProperty(this, 'numberOfConnectedServers', {
    enumerable:true, get: function() {
      return self.s.mongos.s.mongosState.connectedServers().length;
    }
  });

  // BSON property
  Object.defineProperty(this, 'bson', {
    enumerable: true, get: function() {
      return self.s.mongos.bson;
    }
  });

  Object.defineProperty(this, 'haInterval', {
    enumerable:true, get: function() { return self.s.mongos.haInterval; }
  });
}

/**
 * @ignore
 */
inherits(Mongos, EventEmitter);

var define = Mongos.define = new Define('Mongos', Mongos, false);

// Connect
Mongos.prototype.connect = function(db, _options, callback) {
  var self = this;
  if('function' === typeof _options) callback = _options, _options = {};
  if(_options == null) _options = {};
  if(!('function' === typeof callback)) callback = null;
  self.s.options = _options;

  // Update bufferMaxEntries
  self.s.storeOptions.bufferMaxEntries = db.bufferMaxEntries;

  // Error handler
  var connectErrorHandler = function(event) {
    return function(err) {
      // Remove all event handlers
      var events = ['timeout', 'error', 'close'];
      events.forEach(function(e) {
        self.removeListener(e, connectErrorHandler);
      });

      self.s.mongos.removeListener('connect', connectErrorHandler);

      // Try to callback
      try {
        callback(err);
      } catch(err) {
        process.nextTick(function() { throw err; })
      }
    }
  }

  // Actual handler
  var errorHandler = function(event) {
    return function(err) {
      if(event != 'error') {
        self.emit(event, err);
      }
    }
  }

  // Reconnect handler
  var reconnectHandler = function(err) {
    self.emit('reconnect');
    self.s.store.execute();
  }

  // Connect handler
  var connectHandler = function() {
    // Clear out all the current handlers left over
    ["timeout", "error", "close"].forEach(function(e) {
      self.s.mongos.removeAllListeners(e);
    });

    // Set up listeners
    self.s.mongos.once('timeout', errorHandler('timeout'));
    self.s.mongos.once('error', errorHandler('error'));
    self.s.mongos.once('close', errorHandler('close'));

    // relay the event
    var relay = function(event) {
      return function(t, server) {
        self.emit(event, t, server);
      }
    }

    // Set up serverConfig listeners
    self.s.mongos.on('joined', relay('joined'));
    self.s.mongos.on('left', relay('left'));
    self.s.mongos.on('fullsetup', relay('fullsetup'));

    // Emit open event
    self.emit('open', null, self);

    // Return correctly
    try {
      callback(null, self);
    } catch(err) {
      process.nextTick(function() { throw err; })
    }
  }

  // Set up listeners
  self.s.mongos.once('timeout', connectErrorHandler('timeout'));
  self.s.mongos.once('error', connectErrorHandler('error'));
  self.s.mongos.once('close', connectErrorHandler('close'));
  self.s.mongos.once('connect', connectHandler);
  // Reconnect server
  self.s.mongos.on('reconnect', reconnectHandler);

  // Start connection
  self.s.mongos.connect(_options);
}

Mongos.prototype.parserType = function() {
  return this.s.mongos.parserType();
}

define.classMethod('parserType', {callback: false, promise:false, returns: [String]});

// Server capabilities
Mongos.prototype.capabilities = function() {
  if(this.s.sCapabilities) return this.s.sCapabilities;
  if(this.s.mongos.lastIsMaster() == null) return null;
  this.s.sCapabilities = new ServerCapabilities(this.s.mongos.lastIsMaster());
  return this.s.sCapabilities;
}

define.classMethod('capabilities', {callback: false, promise:false, returns: [ServerCapabilities]});

// Command
Mongos.prototype.command = function(ns, cmd, options, callback) {
  this.s.mongos.command(ns, cmd, options, callback);
}

define.classMethod('command', {callback: true, promise:false});

// Insert
Mongos.prototype.insert = function(ns, ops, options, callback) {
  this.s.mongos.insert(ns, ops, options, function(e, m) {
    callback(e, m)
  });
}

define.classMethod('insert', {callback: true, promise:false});

// Update
Mongos.prototype.update = function(ns, ops, options, callback) {
  this.s.mongos.update(ns, ops, options, callback);
}

define.classMethod('update', {callback: true, promise:false});

// Remove
Mongos.prototype.remove = function(ns, ops, options, callback) {
  this.s.mongos.remove(ns, ops, options, callback);
}

define.classMethod('remove', {callback: true, promise:false});

// Destroyed
Mongos.prototype.isDestroyed = function() {
  return this.s.mongos.isDestroyed();
}

// IsConnected
Mongos.prototype.isConnected = function() {
  return this.s.mongos.isConnected();
}

define.classMethod('isConnected', {callback: false, promise:false, returns: [Boolean]});

// Cursor
Mongos.prototype.cursor = function(ns, cmd, options) {
  options.disconnectHandler = this.s.store;
  return this.s.mongos.cursor(ns, cmd, options);
}

define.classMethod('cursor', {callback: false, promise:false, returns: [Cursor, AggregationCursor, CommandCursor]});

Mongos.prototype.setBSONParserType = function(type) {
  return this.s.mongos.setBSONParserType(type);
}

Mongos.prototype.lastIsMaster = function() {
  return this.s.mongos.lastIsMaster();
}

Mongos.prototype.close = function(forceClosed) {
  this.s.mongos.destroy();
  // We need to wash out all stored processes
  if(forceClosed == true) {
    this.s.storeOptions.force = forceClosed;
    this.s.store.flush();
  }
}

define.classMethod('close', {callback: false, promise:false});

Mongos.prototype.auth = function() {
  var args = Array.prototype.slice.call(arguments, 0);
  this.s.mongos.auth.apply(this.s.mongos, args);
}

define.classMethod('auth', {callback: true, promise:false});

/**
 * All raw connections
 * @method
 * @return {array}
 */
Mongos.prototype.connections = function() {
  return this.s.mongos.connections();
}

define.classMethod('connections', {callback: false, promise:false, returns:[Array]});

/**
 * A mongos connect event, used to verify that the connection is up and running
 *
 * @event Mongos#connect
 * @type {Mongos}
 */

/**
 * The mongos high availability event
 *
 * @event Mongos#ha
 * @type {function}
 * @param {string} type The stage in the high availability event (start|end)
 * @param {boolean} data.norepeat Whether this is a repeating high availability process or a single execution only
 * @param {number} data.id The id for this high availability request
 * @param {object} data.state An object containing the information about the current replicaset
 */

/**
 * A server member left the mongos set
 *
 * @event Mongos#left
 * @type {function}
 * @param {string} type The type of member that left (primary|secondary|arbiter)
 * @param {Server} server The server object that left
 */

/**
 * A server member joined the mongos set
 *
 * @event Mongos#joined
 * @type {function}
 * @param {string} type The type of member that joined (primary|secondary|arbiter)
 * @param {Server} server The server object that joined
 */

/**
 * Mongos fullsetup event, emitted when all proxies in the topology have been connected to.
 *
 * @event Mongos#fullsetup
 * @type {Mongos}
 */

/**
 * Mongos open event, emitted when mongos can start processing commands.
 *
 * @event Mongos#open
 * @type {Mongos}
 */

/**
 * Mongos close event
 *
 * @event Mongos#close
 * @type {object}
 */

/**
 * Mongos error event, emitted if there is an error listener.
 *
 * @event Mongos#error
 * @type {MongoError}
 */

/**
 * Mongos timeout event
 *
 * @event Mongos#timeout
 * @type {object}
 */

/**
 * Mongos parseError event
 *
 * @event Mongos#parseError
 * @type {object}
 */

module.exports = Mongos;
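
// --- Usage sketch (illustrative; not part of the vendored source) ---
// A minimal sketch of driving the Mongos topology above directly. The
// localhost:50000/50001 proxies are hypothetical, and the plain object passed
// as `db` stands in for a real Db instance (connect() only reads its
// bufferMaxEntries). In application code, prefer MongoClient.connect.
var exampleTopology = new Mongos([
  new Server('localhost', 50000), new Server('localhost', 50001)
]);
exampleTopology.connect({bufferMaxEntries: -1}, {}, function(err, topology) {
  if(err) throw err;                    // timeout/error/close during connect land here
  console.log(topology.isConnected());  // true once a proxy has responded
  topology.close(true);                 // destroy and force-flush buffered operations
});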
104
node_modules/mongodb/lib/read_preference.js
generated
vendored
Normal file
@@ -0,0 +1,104 @@
"use strict";
|
||||
|
||||
/**
|
||||
* @fileOverview The **ReadPreference** class is a class that represents a MongoDB ReadPreference and is
|
||||
* used to construct connections.
|
||||
*
|
||||
* @example
|
||||
* var Db = require('mongodb').Db,
|
||||
* ReplSet = require('mongodb').ReplSet,
|
||||
* Server = require('mongodb').Server,
|
||||
* ReadPreference = require('mongodb').ReadPreference,
|
||||
* test = require('assert');
|
||||
* // Connect using ReplSet
|
||||
* var server = new Server('localhost', 27017);
|
||||
* var db = new Db('test', new ReplSet([server]));
|
||||
* db.open(function(err, db) {
|
||||
* test.equal(null, err);
|
||||
* // Perform a read
|
||||
* var cursor = db.collection('t').find({});
|
||||
* cursor.setReadPreference(ReadPreference.PRIMARY);
|
||||
* cursor.toArray(function(err, docs) {
|
||||
* test.equal(null, err);
|
||||
* db.close();
|
||||
* });
|
||||
* });
|
||||
*/
|
||||
|
||||
/**
 * Creates a new ReadPreference instance
 *
 * Read Preferences
 * - **ReadPreference.PRIMARY**, Read from primary only. All operations produce an error (throw an exception where applicable) if primary is unavailable. Cannot be combined with tags (This is the default.).
 * - **ReadPreference.PRIMARY_PREFERRED**, Read from primary if available, otherwise a secondary.
 * - **ReadPreference.SECONDARY**, Read from secondary if available, otherwise error.
 * - **ReadPreference.SECONDARY_PREFERRED**, Read from a secondary if available, otherwise read from the primary.
 * - **ReadPreference.NEAREST**, All modes read from among the nearest candidates, but unlike other modes, NEAREST will include both the primary and all secondaries in the random selection.
 *
 * @class
 * @param {string} mode The ReadPreference mode as listed above.
 * @param {object} tags An object representing read preference tags.
 * @property {string} mode The ReadPreference mode.
 * @property {object} tags The ReadPreference tags.
 * @return {ReadPreference} a ReadPreference instance.
 */
var ReadPreference = function(mode, tags) {
  if(!(this instanceof ReadPreference))
    return new ReadPreference(mode, tags);
  this._type = 'ReadPreference';
  this.mode = mode;
  this.tags = tags;
}

/**
 * Validate if a mode is legal
 *
 * @method
 * @param {string} mode The string representing the read preference mode.
 * @return {boolean}
 */
ReadPreference.isValid = function(_mode) {
  return (_mode == ReadPreference.PRIMARY || _mode == ReadPreference.PRIMARY_PREFERRED
    || _mode == ReadPreference.SECONDARY || _mode == ReadPreference.SECONDARY_PREFERRED
    || _mode == ReadPreference.NEAREST
    || _mode == true || _mode == false || _mode == null);
}

/**
 * Validate if a mode is legal
 *
 * @method
 * @param {string} mode The string representing the read preference mode.
 * @return {boolean}
 */
ReadPreference.prototype.isValid = function(mode) {
  var _mode = typeof mode == 'string' ? mode : this.mode;
  return ReadPreference.isValid(_mode);
}

/**
 * @ignore
 */
ReadPreference.prototype.toObject = function() {
  var object = {mode:this.mode};

  if(this.tags != null) {
    object['tags'] = this.tags;
  }

  return object;
}

/**
 * @ignore
 */
ReadPreference.PRIMARY = 'primary';
ReadPreference.PRIMARY_PREFERRED = 'primaryPreferred';
ReadPreference.SECONDARY = 'secondary';
ReadPreference.SECONDARY_PREFERRED = 'secondaryPreferred';
ReadPreference.NEAREST = 'nearest';

/**
 * @ignore
 */
module.exports = ReadPreference;
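
// --- Usage sketch (illustrative; not part of the vendored source) ---
// A short sketch of the ReadPreference surface defined above: build a tagged
// secondaryPreferred preference, validate a mode string, and serialize it with
// toObject(). The tag set {dc: 'east'} is hypothetical.
var examplePref = new ReadPreference(ReadPreference.SECONDARY_PREFERRED, {dc: 'east'});
console.log(examplePref.isValid());           // true - falls back to this.mode
console.log(ReadPreference.isValid('bogus')); // false - not one of the five modes
console.log(examplePref.toObject());          // { mode: 'secondaryPreferred', tags: { dc: 'east' } }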
562
node_modules/mongodb/lib/replset.js
generated
vendored
Normal file
@@ -0,0 +1,562 @@
"use strict";
|
||||
|
||||
var EventEmitter = require('events').EventEmitter
|
||||
, inherits = require('util').inherits
|
||||
, f = require('util').format
|
||||
, Server = require('./server')
|
||||
, Mongos = require('./mongos')
|
||||
, Cursor = require('./cursor')
|
||||
, AggregationCursor = require('./aggregation_cursor')
|
||||
, CommandCursor = require('./command_cursor')
|
||||
, ReadPreference = require('./read_preference')
|
||||
, MongoCR = require('mongodb-core').MongoCR
|
||||
, MongoError = require('mongodb-core').MongoError
|
||||
, ServerCapabilities = require('./topology_base').ServerCapabilities
|
||||
, Store = require('./topology_base').Store
|
||||
, Define = require('./metadata')
|
||||
, CServer = require('mongodb-core').Server
|
||||
, CReplSet = require('mongodb-core').ReplSet
|
||||
, CoreReadPreference = require('mongodb-core').ReadPreference
|
||||
, shallowClone = require('./utils').shallowClone;
|
||||
|
||||
/**
 * @fileOverview The **ReplSet** class represents a replicaset topology and is
 * used to construct connections.
 *
 * **ReplSet should not be used, use MongoClient.connect**
 * @example
 * var Db = require('mongodb').Db,
 *   ReplSet = require('mongodb').ReplSet,
 *   Server = require('mongodb').Server,
 *   test = require('assert');
 * // Connect using ReplSet
 * var server = new Server('localhost', 27017);
 * var db = new Db('test', new ReplSet([server]));
 * db.open(function(err, db) {
 *   // Get an additional db
 *   db.close();
 * });
 */

/**
 * Creates a new ReplSet instance
 * @class
 * @deprecated
 * @param {Server[]} servers A seedlist of servers participating in the replicaset.
 * @param {object} [options=null] Optional settings.
 * @param {boolean} [options.ha=true] Turn on high availability monitoring.
 * @param {number} [options.haInterval=5000] Time between each replicaset status check.
 * @param {string} options.replicaSet The name of the replicaset to connect to.
 * @param {number} [options.secondaryAcceptableLatencyMS=15] Sets the range of servers to pick when using NEAREST (lowest ping ms + the latency fence, ex: range of 1 to (1 + 15) ms)
 * @param {boolean} [options.connectWithNoPrimary=false] Sets if the driver should connect even if no primary is available
 * @param {number} [options.poolSize=5] Number of connections in the connection pool for each server instance, set to 5 as default for legacy reasons.
 * @param {boolean} [options.ssl=false] Use ssl connection (needs to have a mongod server with ssl support)
 * @param {boolean|function} [options.checkServerIdentity=true] Ensure we check server identity during SSL, set to false to disable checking. Only works for Node 0.12.x or higher. You can pass in a boolean or your own checkServerIdentity override function.
 * @param {object} [options.sslValidate=true] Validate mongod server certificate against ca (needs to have a mongod server with ssl support, 2.4 or higher)
 * @param {array} [options.sslCA=null] Array of valid certificates either as Buffers or Strings (needs to have a mongod server with ssl support, 2.4 or higher)
 * @param {(Buffer|string)} [options.sslCert=null] String or buffer containing the certificate we wish to present (needs to have a mongod server with ssl support, 2.4 or higher)
 * @param {(Buffer|string)} [options.sslKey=null] String or buffer containing the certificate private key we wish to present (needs to have a mongod server with ssl support, 2.4 or higher)
 * @param {(Buffer|string)} [options.sslPass=null] String or buffer containing the certificate password (needs to have a mongod server with ssl support, 2.4 or higher)
 * @param {object} [options.socketOptions=null] Socket options
 * @param {boolean} [options.socketOptions.noDelay=true] TCP Socket NoDelay option.
 * @param {number} [options.socketOptions.keepAlive=0] TCP KeepAlive on the socket with an X ms delay before start.
 * @param {number} [options.socketOptions.connectTimeoutMS=10000] TCP Connection timeout setting
 * @param {number} [options.socketOptions.socketTimeoutMS=0] TCP Socket timeout setting
 * @fires ReplSet#connect
 * @fires ReplSet#ha
 * @fires ReplSet#joined
 * @fires ReplSet#left
 * @fires ReplSet#fullsetup
 * @fires ReplSet#open
 * @fires ReplSet#close
 * @fires ReplSet#error
 * @fires ReplSet#timeout
 * @fires ReplSet#parseError
 * @return {ReplSet} a ReplSet instance.
 */
var ReplSet = function(servers, options) {
  if(!(this instanceof ReplSet)) return new ReplSet(servers, options);
  options = options || {};
  var self = this;

  // Ensure all the instances are Server
  for(var i = 0; i < servers.length; i++) {
    if(!(servers[i] instanceof Server)) {
      throw MongoError.create({message: "all seed list instances must be of the Server type", driver:true});
    }
  }

  // Store option defaults
  var storeOptions = {
      force: false
    , bufferMaxEntries: -1
  }

  // Shared global store
  var store = options.store || new Store(self, storeOptions);

  // Set up event emitter
  EventEmitter.call(this);

  // Debug tag
  var tag = options.tag;

  // Build seed list
  var seedlist = servers.map(function(x) {
    return {host: x.host, port: x.port}
  });

  // Final options
  var finalOptions = shallowClone(options);

  // Default values
  finalOptions.size = typeof options.poolSize == 'number' ? options.poolSize : 5;
  finalOptions.reconnect = typeof options.auto_reconnect == 'boolean' ? options.auto_reconnect : true;
  finalOptions.emitError = typeof options.emitError == 'boolean' ? options.emitError : true;
  finalOptions.cursorFactory = Cursor;

  // Add the store
  finalOptions.disconnectHandler = store;

  // Socket options passed down
  if(options.socketOptions) {
    if(options.socketOptions.connectTimeoutMS) {
      this.connectTimeoutMS = options.socketOptions.connectTimeoutMS;
      finalOptions.connectionTimeout = options.socketOptions.connectTimeoutMS;
    }

    if(options.socketOptions.socketTimeoutMS) {
      finalOptions.socketTimeout = options.socketOptions.socketTimeoutMS;
    }
  }

  // Get the name
  var replicaSet = options.replicaSet || options.rs_name;

  // Set up options
  finalOptions.setName = replicaSet;

  // Are we running in debug mode
  var debug = typeof options.debug == 'boolean' ? options.debug : false;
  if(debug) {
    finalOptions.debug = debug;
  }

  // Map keep alive setting
  if(options.socketOptions && typeof options.socketOptions.keepAlive == 'number') {
    finalOptions.keepAlive = true;
    if(typeof options.socketOptions.keepAlive == 'number') {
      finalOptions.keepAliveInitialDelay = options.socketOptions.keepAlive;
    }
  }

  // Connection timeout
  if(options.socketOptions && typeof options.socketOptions.connectionTimeout == 'number') {
    finalOptions.connectionTimeout = options.socketOptions.connectionTimeout;
  }

  // Socket timeout
  if(options.socketOptions && typeof options.socketOptions.socketTimeout == 'number') {
    finalOptions.socketTimeout = options.socketOptions.socketTimeout;
  }

  // noDelay
  if(options.socketOptions && typeof options.socketOptions.noDelay == 'boolean') {
    finalOptions.noDelay = options.socketOptions.noDelay;
  }

  if(typeof options.secondaryAcceptableLatencyMS == 'number') {
    finalOptions.acceptableLatency = options.secondaryAcceptableLatencyMS;
  }

  if(options.connectWithNoPrimary == true) {
    finalOptions.secondaryOnlyConnectionAllowed = true;
  }

  // Add the non connection store
  finalOptions.disconnectHandler = store;

  // Translate the options
  if(options.sslCA) finalOptions.ca = options.sslCA;
  if(typeof options.sslValidate == 'boolean') finalOptions.rejectUnauthorized = options.sslValidate;
  if(options.sslKey) finalOptions.key = options.sslKey;
  if(options.sslCert) finalOptions.cert = options.sslCert;
  if(options.sslPass) finalOptions.passphrase = options.sslPass;
  if(options.checkServerIdentity) finalOptions.checkServerIdentity = options.checkServerIdentity;

  // Create the ReplSet
  var replset = new CReplSet(seedlist, finalOptions)
  // Server capabilities
  var sCapabilities = null;
  // Add auth providers
  replset.addAuthProvider('mongocr', new MongoCR());

  // Listen to reconnect event
  replset.on('reconnect', function() {
    self.emit('reconnect');
    store.execute();
  });

  // Internal state
  this.s = {
    // Replicaset
    replset: replset
    // Server capabilities
    , sCapabilities: null
    // Debug tag
    , tag: options.tag
    // Store options
    , storeOptions: storeOptions
    // Cloned options
    , clonedOptions: finalOptions
    // Store
    , store: store
    // Options
    , options: options
  }

  // Debug
  if(debug) {
    // Expose the underlying mongodb-core replset (debug mode only)
    Object.defineProperty(this, 'replset', {
      enumerable:true, get: function() { return replset; }
    });
  }

  // Last ismaster
  Object.defineProperty(this, 'isMasterDoc', {
    enumerable:true, get: function() { return replset.lastIsMaster(); }
  });

  // BSON property
  Object.defineProperty(this, 'bson', {
    enumerable: true, get: function() {
      return replset.bson;
    }
  });

  Object.defineProperty(this, 'haInterval', {
    enumerable:true, get: function() { return replset.haInterval; }
  });
}

/**
 * @ignore
 */
inherits(ReplSet, EventEmitter);

var define = ReplSet.define = new Define('ReplSet', ReplSet, false);

// Ensure the right read Preference object
var translateReadPreference = function(options) {
  if(typeof options.readPreference == 'string') {
    options.readPreference = new CoreReadPreference(options.readPreference);
  } else if(options.readPreference instanceof ReadPreference) {
    options.readPreference = new CoreReadPreference(options.readPreference.mode
      , options.readPreference.tags);
  }

  return options;
}

ReplSet.prototype.parserType = function() {
  return this.s.replset.parserType();
}

define.classMethod('parserType', {callback: false, promise:false, returns: [String]});

// Connect method
ReplSet.prototype.connect = function(db, _options, callback) {
  var self = this;
  if('function' === typeof _options) callback = _options, _options = {};
  if(_options == null) _options = {};
  if(!('function' === typeof callback)) callback = null;
  self.s.options = _options;

  // Update bufferMaxEntries
  self.s.storeOptions.bufferMaxEntries = db.bufferMaxEntries;

  // Actual handler
  var errorHandler = function(event) {
    return function(err) {
      if(event != 'error') {
        self.emit(event, err);
      }
    }
  }

  // Connect handler
  var connectHandler = function() {
    // Clear out all the current handlers left over
    ["timeout", "error", "close"].forEach(function(e) {
      self.s.replset.removeAllListeners(e);
    });

    // Set up listeners
    self.s.replset.once('timeout', errorHandler('timeout'));
    self.s.replset.once('error', errorHandler('error'));
    self.s.replset.once('close', errorHandler('close'));

    // relay the event
    var relay = function(event) {
      return function(t, server) {
        self.emit(event, t, server);
      }
    }

    // Replset events relay
    var replsetRelay = function(event) {
      return function(t, server) {
        self.emit(event, t, server.lastIsMaster(), server);
      }
    }

    // Relay ha
    var relayHa = function(t, state) {
      self.emit('ha', t, state);

      if(t == 'start') {
        self.emit('ha_connect', t, state);
      } else if(t == 'end') {
        self.emit('ha_ismaster', t, state);
      }
    }

    // Set up serverConfig listeners
    self.s.replset.on('joined', replsetRelay('joined'));
    self.s.replset.on('left', relay('left'));
    self.s.replset.on('ping', relay('ping'));
    self.s.replset.on('ha', relayHa);

    self.s.replset.on('fullsetup', function(topology) {
      self.emit('fullsetup', null, self);
    });

    self.s.replset.on('all', function(topology) {
      self.emit('all', null, self);
    });

    // Emit open event
    self.emit('open', null, self);

    // Return correctly
    try {
      callback(null, self);
    } catch(err) {
      process.nextTick(function() { throw err; })
    }
  }

  // Error handler
  var connectErrorHandler = function(event) {
    return function(err) {
      ['timeout', 'error', 'close'].forEach(function(e) {
        self.s.replset.removeListener(e, connectErrorHandler);
      });

      self.s.replset.removeListener('connect', connectErrorHandler);
      // Destroy the replset
      self.s.replset.destroy();

      // Try to callback
      try {
        callback(err);
      } catch(err) {
        if(!self.s.replset.isConnected())
          process.nextTick(function() { throw err; })
      }
    }
  }

  // Set up listeners
  self.s.replset.once('timeout', connectErrorHandler('timeout'));
  self.s.replset.once('error', connectErrorHandler('error'));
  self.s.replset.once('close', connectErrorHandler('close'));
  self.s.replset.once('connect', connectHandler);

  // Start connection
  self.s.replset.connect(_options);
}

// Server capabilities
ReplSet.prototype.capabilities = function() {
  if(this.s.sCapabilities) return this.s.sCapabilities;
  if(this.s.replset.lastIsMaster() == null) return null;
  this.s.sCapabilities = new ServerCapabilities(this.s.replset.lastIsMaster());
  return this.s.sCapabilities;
}

define.classMethod('capabilities', {callback: false, promise:false, returns: [ServerCapabilities]});

// Command
ReplSet.prototype.command = function(ns, cmd, options, callback) {
  options = translateReadPreference(options);
  this.s.replset.command(ns, cmd, options, callback);
}

define.classMethod('command', {callback: true, promise:false});

// Insert
ReplSet.prototype.insert = function(ns, ops, options, callback) {
  this.s.replset.insert(ns, ops, options, callback);
}

define.classMethod('insert', {callback: true, promise:false});

// Update
ReplSet.prototype.update = function(ns, ops, options, callback) {
  this.s.replset.update(ns, ops, options, callback);
}

define.classMethod('update', {callback: true, promise:false});

// Remove
ReplSet.prototype.remove = function(ns, ops, options, callback) {
  this.s.replset.remove(ns, ops, options, callback);
}

define.classMethod('remove', {callback: true, promise:false});

// Destroyed
ReplSet.prototype.isDestroyed = function() {
  return this.s.replset.isDestroyed();
}

// IsConnected
ReplSet.prototype.isConnected = function() {
  return this.s.replset.isConnected();
}

define.classMethod('isConnected', {callback: false, promise:false, returns: [Boolean]});

ReplSet.prototype.setBSONParserType = function(type) {
  return this.s.replset.setBSONParserType(type);
}

// Cursor
ReplSet.prototype.cursor = function(ns, cmd, options) {
  options = translateReadPreference(options);
  options.disconnectHandler = this.s.store;
  return this.s.replset.cursor(ns, cmd, options);
}

define.classMethod('cursor', {callback: false, promise:false, returns: [Cursor, AggregationCursor, CommandCursor]});

ReplSet.prototype.lastIsMaster = function() {
  return this.s.replset.lastIsMaster();
}

ReplSet.prototype.close = function(forceClosed) {
  var self = this;
  this.s.replset.destroy();
  // We need to wash out all stored processes
  if(forceClosed == true) {
    this.s.storeOptions.force = forceClosed;
    this.s.store.flush();
  }

  var events = ['timeout', 'error', 'close', 'joined', 'left'];
  events.forEach(function(e) {
    self.removeAllListeners(e);
  });
}

define.classMethod('close', {callback: false, promise:false});

ReplSet.prototype.auth = function() {
  var args = Array.prototype.slice.call(arguments, 0);
  this.s.replset.auth.apply(this.s.replset, args);
}

define.classMethod('auth', {callback: true, promise:false});

/**
 * All raw connections
 * @method
 * @return {array}
 */
ReplSet.prototype.connections = function() {
  return this.s.replset.connections();
}

define.classMethod('connections', {callback: false, promise:false, returns:[Array]});

/**
 * A replset connect event, used to verify that the connection is up and running
 *
 * @event ReplSet#connect
 * @type {ReplSet}
 */

/**
 * The replset high availability event
 *
 * @event ReplSet#ha
 * @type {function}
 * @param {string} type The stage in the high availability event (start|end)
 * @param {boolean} data.norepeat Whether this is a repeating high availability process or a single execution only
 * @param {number} data.id The id for this high availability request
 * @param {object} data.state An object containing the information about the current replicaset
 */

/**
 * A server member left the replicaset
 *
 * @event ReplSet#left
 * @type {function}
 * @param {string} type The type of member that left (primary|secondary|arbiter)
 * @param {Server} server The server object that left
 */

/**
 * A server member joined the replicaset
 *
 * @event ReplSet#joined
 * @type {function}
 * @param {string} type The type of member that joined (primary|secondary|arbiter)
 * @param {Server} server The server object that joined
 */

/**
 * ReplSet open event, emitted when replicaset can start processing commands.
 *
 * @event ReplSet#open
 * @type {ReplSet}
 */

/**
 * ReplSet fullsetup event, emitted when all servers in the topology have been connected to.
 *
 * @event ReplSet#fullsetup
 * @type {ReplSet}
 */

/**
 * ReplSet close event
 *
 * @event ReplSet#close
 * @type {object}
 */

/**
 * ReplSet error event, emitted if there is an error listener.
 *
 * @event ReplSet#error
 * @type {MongoError}
 */

/**
 * ReplSet timeout event
 *
 * @event ReplSet#timeout
 * @type {object}
 */

/**
 * ReplSet parseError event
 *
 * @event ReplSet#parseError
 * @type {object}
 */

module.exports = ReplSet;
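
// --- Usage sketch (illustrative; not part of the vendored source) ---
// translateReadPreference above accepts the two shapes callers may put in
// options.readPreference - a mode string or a driver-level ReadPreference -
// and normalizes both into a mongodb-core ReadPreference before dispatch.
// The tag set {dc: 'west'} is hypothetical.
var byString = translateReadPreference({readPreference: 'secondaryPreferred'});
console.log(byString.readPreference instanceof CoreReadPreference); // true
var byInstance = translateReadPreference({
  readPreference: new ReadPreference('nearest', {dc: 'west'})
});
console.log(byInstance.readPreference instanceof CoreReadPreference); // true - mode and tags carried over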
442
node_modules/mongodb/lib/server.js
generated
vendored
Normal file
@@ -0,0 +1,442 @@
"use strict";
|
||||
|
||||
var EventEmitter = require('events').EventEmitter
|
||||
, inherits = require('util').inherits
|
||||
, CServer = require('mongodb-core').Server
|
||||
, Cursor = require('./cursor')
|
||||
, AggregationCursor = require('./aggregation_cursor')
|
||||
, CommandCursor = require('./command_cursor')
|
||||
, f = require('util').format
|
||||
, ServerCapabilities = require('./topology_base').ServerCapabilities
|
||||
, Store = require('./topology_base').Store
|
||||
, Define = require('./metadata')
|
||||
, MongoError = require('mongodb-core').MongoError
|
||||
, shallowClone = require('./utils').shallowClone;
|
||||
|
||||
/**
 * @fileOverview The **Server** class represents a single server topology and is
 * used to construct connections.
 *
 * **Server should not be used, use MongoClient.connect**
 * @example
 * var Db = require('mongodb').Db,
 *   Server = require('mongodb').Server,
 *   test = require('assert');
 * // Connect using single Server
 * var db = new Db('test', new Server('localhost', 27017));
 * db.open(function(err, db) {
 *   // Get an additional db
 *   db.close();
 * });
 */

/**
 * Creates a new Server instance
 * @class
 * @deprecated
 * @param {string} host The host for the server, can be either an IP4, IP6 or domain socket style host.
 * @param {number} [port] The server port if IP4.
 * @param {object} [options=null] Optional settings.
 * @param {number} [options.poolSize=5] Number of connections in the connection pool for each server instance, set to 5 as default for legacy reasons.
 * @param {boolean} [options.ssl=false] Use ssl connection (needs to have a mongod server with ssl support)
 * @param {object} [options.sslValidate=true] Validate mongod server certificate against ca (needs to have a mongod server with ssl support, 2.4 or higher)
 * @param {boolean|function} [options.checkServerIdentity=true] Ensure we check server identity during SSL, set to false to disable checking. Only works for Node 0.12.x or higher. You can pass in a boolean or your own checkServerIdentity override function.
 * @param {array} [options.sslCA=null] Array of valid certificates either as Buffers or Strings (needs to have a mongod server with ssl support, 2.4 or higher)
 * @param {(Buffer|string)} [options.sslCert=null] String or buffer containing the certificate we wish to present (needs to have a mongod server with ssl support, 2.4 or higher)
 * @param {(Buffer|string)} [options.sslKey=null] String or buffer containing the certificate private key we wish to present (needs to have a mongod server with ssl support, 2.4 or higher)
 * @param {(Buffer|string)} [options.sslPass=null] String or buffer containing the certificate password (needs to have a mongod server with ssl support, 2.4 or higher)
 * @param {object} [options.socketOptions=null] Socket options
 * @param {boolean} [options.socketOptions.autoReconnect=false] Reconnect on error.
 * @param {boolean} [options.socketOptions.noDelay=true] TCP Socket NoDelay option.
 * @param {number} [options.socketOptions.keepAlive=0] TCP KeepAlive on the socket with an X ms delay before start.
 * @param {number} [options.socketOptions.connectTimeoutMS=0] TCP Connection timeout setting
 * @param {number} [options.socketOptions.socketTimeoutMS=0] TCP Socket timeout setting
 * @param {number} [options.reconnectTries=30] Server attempt to reconnect #times
 * @param {number} [options.reconnectInterval=1000] Server will wait # milliseconds between retries
 * @fires Server#connect
 * @fires Server#close
 * @fires Server#error
 * @fires Server#timeout
 * @fires Server#parseError
 * @fires Server#reconnect
 * @return {Server} a Server instance.
 */
var Server = function(host, port, options) {
  options = options || {};
  if(!(this instanceof Server)) return new Server(host, port, options);
  EventEmitter.call(this);
  var self = this;

  // Store option defaults
  var storeOptions = {
      force: false
    , bufferMaxEntries: -1
  }

  // Shared global store
  var store = options.store || new Store(self, storeOptions);

  // Detect if we have a socket connection
  if(host.indexOf('\/') != -1) {
    if(port != null && typeof port == 'object') {
      options = port;
      port = null;
    }
  } else if(port == null) {
    throw MongoError.create({message: 'port must be specified', driver:true});
  }

  // Clone options
  var clonedOptions = shallowClone(options);
  clonedOptions.host = host;
  clonedOptions.port = port;

  // Reconnect
  var reconnect = typeof options.auto_reconnect == 'boolean' ? options.auto_reconnect : true;
  reconnect = typeof options.autoReconnect == 'boolean' ? options.autoReconnect : reconnect;
  var emitError = typeof options.emitError == 'boolean' ? options.emitError : true;
  var poolSize = typeof options.poolSize == 'number' ? options.poolSize : 5;

  // Socket options passed down
  if(options.socketOptions) {
    if(options.socketOptions.connectTimeoutMS) {
      this.connectTimeoutMS = options.socketOptions.connectTimeoutMS;
      clonedOptions.connectionTimeout = options.socketOptions.connectTimeoutMS;
    }

    if(options.socketOptions.socketTimeoutMS) {
      clonedOptions.socketTimeout = options.socketOptions.socketTimeoutMS;
    }

    if(typeof options.socketOptions.keepAlive == 'number') {
      clonedOptions.keepAliveInitialDelay = options.socketOptions.keepAlive;
      clonedOptions.keepAlive = true;
    }

    if(typeof options.socketOptions.noDelay == 'boolean') {
      clonedOptions.noDelay = options.socketOptions.noDelay;
    }
  }

  // Add the cursor factory function
  clonedOptions.cursorFactory = Cursor;
  clonedOptions.reconnect = reconnect;
  clonedOptions.emitError = emitError;
  clonedOptions.size = poolSize;

  // Translate the options
  if(clonedOptions.sslCA) clonedOptions.ca = clonedOptions.sslCA;
  if(typeof clonedOptions.sslValidate == 'boolean') clonedOptions.rejectUnauthorized = clonedOptions.sslValidate;
  if(clonedOptions.sslKey) clonedOptions.key = clonedOptions.sslKey;
  if(clonedOptions.sslCert) clonedOptions.cert = clonedOptions.sslCert;
  if(clonedOptions.sslPass) clonedOptions.passphrase = clonedOptions.sslPass;

  // Add the non connection store
  clonedOptions.disconnectHandler = store;

  // Create an instance of a server instance from mongodb-core
  var server = new CServer(clonedOptions);
  // Server capabilities
  var sCapabilities = null;

  // Define the internal properties
  this.s = {
    // Create an instance of a server instance from mongodb-core
    server: server
    // Server capabilities
    , sCapabilities: null
    // Cloned options
    , clonedOptions: clonedOptions
    // Reconnect
    , reconnect: reconnect
    // Emit error
    , emitError: emitError
    // Pool size
    , poolSize: poolSize
    // Store Options
    , storeOptions: storeOptions
    // Store
    , store: store
    // Host
    , host: host
    // Port
    , port: port
    // Options
    , options: options
  }

  // BSON property
  Object.defineProperty(this, 'bson', {
    enumerable: true, get: function() {
      return self.s.server.bson;
    }
  });

  // Last ismaster
  Object.defineProperty(this, 'isMasterDoc', {
    enumerable:true, get: function() {
      return self.s.server.lastIsMaster();
    }
  });

  // Current number of pooled connections
  Object.defineProperty(this, 'poolSize', {
    enumerable:true, get: function() { return self.s.server.connections().length; }
  });

  Object.defineProperty(this, 'autoReconnect', {
    enumerable:true, get: function() { return self.s.reconnect; }
  });

  Object.defineProperty(this, 'host', {
    enumerable:true, get: function() { return self.s.host; }
  });

  Object.defineProperty(this, 'port', {
    enumerable:true, get: function() { return self.s.port; }
  });
}

inherits(Server, EventEmitter);

var define = Server.define = new Define('Server', Server, false);

Server.prototype.parserType = function() {
  return this.s.server.parserType();
}

define.classMethod('parserType', {callback: false, promise:false, returns: [String]});

// Connect
Server.prototype.connect = function(db, _options, callback) {
  var self = this;
  if('function' === typeof _options) callback = _options, _options = {};
  if(_options == null) _options = {};
  if(!('function' === typeof callback)) callback = null;
  self.s.options = _options;

  // Update bufferMaxEntries
  self.s.storeOptions.bufferMaxEntries = db.bufferMaxEntries;

  // Error handler
  var connectErrorHandler = function(event) {
    return function(err) {
      // Remove all event handlers
      var events = ['timeout', 'error', 'close'];
      events.forEach(function(e) {
        self.s.server.removeListener(e, connectHandlers[e]);
      });

      self.s.server.removeListener('connect', connectErrorHandler);

      // Try to callback
      try {
        callback(err);
      } catch(err) {
        process.nextTick(function() { throw err; })
      }
    }
  }

  // Actual handler
  var errorHandler = function(event) {
    return function(err) {
      if(event != 'error') {
        self.emit(event, err);
      }
    }
  }

  // Reconnect handler
  var reconnectHandler = function(err) {
    self.emit('reconnect', self);
    self.s.store.execute();
  }

  // Destroy called on topology, perform cleanup
  var destroyHandler = function() {
    self.s.store.flush();
  }

  // Connect handler
  var connectHandler = function() {
    // Clear out all the current handlers left over
    ["timeout", "error", "close"].forEach(function(e) {
      self.s.server.removeAllListeners(e);
    });

    // Set up listeners
    self.s.server.once('timeout', errorHandler('timeout'));
    self.s.server.once('error', errorHandler('error'));
    self.s.server.on('close', errorHandler('close'));
    // Only called on destroy
    self.s.server.once('destroy', destroyHandler);

    // Emit open event
    self.emit('open', null, self);

    // Return correctly
    try {
      callback(null, self);
    } catch(err) {
      console.log(err.stack)
      process.nextTick(function() { throw err; })
    }
  }

  // Set up listeners
  var connectHandlers = {
    timeout: connectErrorHandler('timeout'),
    error: connectErrorHandler('error'),
    close: connectErrorHandler('close')
  };

  // Add the event handlers
  self.s.server.once('timeout', connectHandlers.timeout);
  self.s.server.once('error', connectHandlers.error);
  self.s.server.once('close', connectHandlers.close);
  self.s.server.once('connect', connectHandler);
  // Reconnect server
  self.s.server.on('reconnect', reconnectHandler);

  // Start connection
  self.s.server.connect(_options);
}

// Server capabilities
Server.prototype.capabilities = function() {
  if(this.s.sCapabilities) return this.s.sCapabilities;
  if(this.s.server.lastIsMaster() == null) return null;
  this.s.sCapabilities = new ServerCapabilities(this.s.server.lastIsMaster());
  return this.s.sCapabilities;
}

define.classMethod('capabilities', {callback: false, promise:false, returns: [ServerCapabilities]});

// Command
Server.prototype.command = function(ns, cmd, options, callback) {
  this.s.server.command(ns, cmd, options, callback);
}

define.classMethod('command', {callback: true, promise:false});

// Insert
Server.prototype.insert = function(ns, ops, options, callback) {
  this.s.server.insert(ns, ops, options, callback);
}

define.classMethod('insert', {callback: true, promise:false});

// Update
Server.prototype.update = function(ns, ops, options, callback) {
  this.s.server.update(ns, ops, options, callback);
}

define.classMethod('update', {callback: true, promise:false});

// Remove
Server.prototype.remove = function(ns, ops, options, callback) {
  this.s.server.remove(ns, ops, options, callback);
}

define.classMethod('remove', {callback: true, promise:false});

// IsConnected
Server.prototype.isConnected = function() {
  return this.s.server.isConnected();
}

Server.prototype.isDestroyed = function() {
  return this.s.server.isDestroyed();
}

define.classMethod('isConnected', {callback: false, promise:false, returns: [Boolean]});

// Cursor
Server.prototype.cursor = function(ns, cmd, options) {
  options.disconnectHandler = this.s.store;
  return this.s.server.cursor(ns, cmd, options);
}

define.classMethod('cursor', {callback: false, promise:false, returns: [Cursor, AggregationCursor, CommandCursor]});

Server.prototype.setBSONParserType = function(type) {
  return this.s.server.setBSONParserType(type);
}

Server.prototype.lastIsMaster = function() {
  return this.s.server.lastIsMaster();
}

Server.prototype.close = function(forceClosed) {
  this.s.server.destroy();
  // We need to wash out all stored processes
  if(forceClosed == true) {
    this.s.storeOptions.force = forceClosed;
    this.s.store.flush();
  }
}

define.classMethod('close', {callback: false, promise:false});

Server.prototype.auth = function() {
  var args = Array.prototype.slice.call(arguments, 0);
  this.s.server.auth.apply(this.s.server, args);
}

define.classMethod('auth', {callback: true, promise:false});

/**
 * All raw connections
 * @method
 * @return {array}
 */
Server.prototype.connections = function() {
  return this.s.server.connections();
}

define.classMethod('connections', {callback: false, promise:false, returns:[Array]});

/**
 * Server connect event
 *
 * @event Server#connect
 * @type {object}
 */

/**
 * Server close event
 *
 * @event Server#close
 * @type {object}
 */

/**
 * Server reconnect event
 *
 * @event Server#reconnect
 * @type {object}
 */

/**
 * Server error event
 *
 * @event Server#error
 * @type {MongoError}
 */

/**
 * Server timeout event
 *
 * @event Server#timeout
 * @type {object}
 */

/**
 * Server parseError event
 *
 * @event Server#parseError
 * @type {object}
 */

module.exports = Server;
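
// --- Usage sketch (illustrative; not part of the vendored source) ---
// A minimal sketch of the socketOptions translation performed by the
// constructor above: keepAlive (a number of ms) becomes keepAlive=true plus
// keepAliveInitialDelay, and connectTimeoutMS becomes connectionTimeout on the
// cloned options handed to mongodb-core. Host and port are hypothetical;
// nothing connects until connect() is called.
var exampleServer = new Server('localhost', 27017, {
  socketOptions: {keepAlive: 100, connectTimeoutMS: 5000, noDelay: true}
});
console.log(exampleServer.s.clonedOptions.keepAlive);             // true
console.log(exampleServer.s.clonedOptions.keepAliveInitialDelay); // 100
console.log(exampleServer.s.clonedOptions.connectionTimeout);     // 5000
console.log(exampleServer.host, exampleServer.port);              // 'localhost' 27017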
152
node_modules/mongodb/lib/topology_base.js
generated
vendored
Normal file
@@ -0,0 +1,152 @@
"use strict";
|
||||
|
||||
var MongoError = require('mongodb-core').MongoError
|
||||
, f = require('util').format;
|
||||
|
||||
// The store of ops
|
||||
var Store = function(topology, storeOptions) {
|
||||
var self = this;
|
||||
var storedOps = [];
|
||||
storeOptions = storeOptions || {force:false, bufferMaxEntries: -1}
|
||||
|
||||
// Internal state
|
||||
this.s = {
|
||||
storedOps: storedOps
|
||||
, storeOptions: storeOptions
|
||||
, topology: topology
|
||||
}
|
||||
|
||||
Object.defineProperty(this, 'length', {
|
||||
enumerable:true, get: function() { return self.s.storedOps.length; }
|
||||
});
|
||||
}
|
||||
|
||||
Store.prototype.add = function(opType, ns, ops, options, callback) {
|
||||
if(this.s.storeOptions.force) {
|
||||
return callback(MongoError.create({message: "db closed by application", driver:true}));
|
||||
}
|
||||
|
||||
if(this.s.storeOptions.bufferMaxEntries == 0) {
|
||||
return callback(MongoError.create({message: f("no connection available for operation and number of stored operation > %s", this.s.storeOptions.bufferMaxEntries), driver:true }));
|
||||
}
|
||||
|
||||
if(this.s.storeOptions.bufferMaxEntries > 0 && this.s.storedOps.length > this.s.storeOptions.bufferMaxEntries) {
|
||||
while(this.s.storedOps.length > 0) {
|
||||
var op = this.s.storedOps.shift();
|
||||
op.c(MongoError.create({message: f("no connection available for operation and number of stored operation > %s", this.s.storeOptions.bufferMaxEntries), driver:true }));
|
||||
}
|
||||
|
||||
return;
|
||||
}
|
||||
|
||||
this.s.storedOps.push({t: opType, n: ns, o: ops, op: options, c: callback})
|
||||
}
|
||||
|
||||
Store.prototype.addObjectAndMethod = function(opType, object, method, params, callback) {
|
||||
if(this.s.storeOptions.force) {
|
||||
return callback(MongoError.create({message: "db closed by application", driver:true }));
|
||||
}
|
||||
|
||||
if(this.s.storeOptions.bufferMaxEntries == 0) {
|
||||
return callback(MongoError.create({message: f("no connection available for operation and number of stored operation > %s", this.s.storeOptions.bufferMaxEntries), driver:true }));
|
||||
}
|
||||
|
||||
if(this.s.storeOptions.bufferMaxEntries > 0 && this.s.storedOps.length > this.s.storeOptions.bufferMaxEntries) {
|
||||
while(this.s.storedOps.length > 0) {
|
||||
var op = this.s.storedOps.shift();
|
||||
op.c(MongoError.create({message: f("no connection available for operation and number of stored operation > %s", this.s.storeOptions.bufferMaxEntries), driver:true }));
|
||||
}
|
||||
|
||||
return;
|
||||
}
|
||||
|
||||
this.s.storedOps.push({t: opType, m: method, o: object, p: params, c: callback})
|
||||
}
|
||||
|
||||
Store.prototype.flush = function() {
|
||||
while(this.s.storedOps.length > 0) {
|
||||
this.s.storedOps.shift().c(MongoError.create({message: f("no connection available for operation"), driver:true }));
|
||||
}
|
||||
}
|
||||
|
||||
Store.prototype.execute = function() {
|
||||
// Get current ops
|
||||
var ops = this.s.storedOps;
|
||||
// Reset the ops
|
||||
this.s.storedOps = [];
|
||||
|
||||
// Execute all the stored ops
|
||||
while(ops.length > 0) {
|
||||
var op = ops.shift();
|
||||
|
||||
if(op.t == 'cursor') {
|
||||
op.o[op.m].apply(op.o, op.p);
|
||||
} else {
|
||||
this.s.topology[op.t](op.n, op.o, op.op, op.c);
|
||||
}
|
||||
}
|
||||
}
|
||||
|
||||
Store.prototype.all = function() {
|
||||
return this.s.storedOps;
|
||||
}
|
||||
|
||||
// Server capabilities
|
||||
var ServerCapabilities = function(ismaster) {
|
||||
var setup_get_property = function(object, name, value) {
|
||||
Object.defineProperty(object, name, {
|
||||
enumerable: true
|
||||
, get: function () { return value; }
|
||||
});
|
||||
}
|
||||
|
||||
// Capabilities
|
||||
var aggregationCursor = false;
|
||||
var writeCommands = false;
|
||||
var textSearch = false;
|
||||
var authCommands = false;
|
||||
var listCollections = false;
|
||||
var listIndexes = false;
|
||||
var maxNumberOfDocsInBatch = ismaster.maxWriteBatchSize || 1000;
|
||||
|
||||
if(ismaster.minWireVersion >= 0) {
|
||||
textSearch = true;
|
||||
}
|
||||
|
||||
if(ismaster.maxWireVersion >= 1) {
|
||||
aggregationCursor = true;
|
||||
authCommands = true;
|
||||
}
|
||||
|
||||
if(ismaster.maxWireVersion >= 2) {
|
||||
writeCommands = true;
|
||||
}
|
||||
|
||||
if(ismaster.maxWireVersion >= 3) {
|
||||
listCollections = true;
|
||||
listIndexes = true;
|
||||
}
|
||||
|
||||
  // If the min or max wire version is missing, default it to 0
  if(ismaster.minWireVersion == null) {
    ismaster.minWireVersion = 0;
  }

  if(ismaster.maxWireVersion == null) {
    ismaster.maxWireVersion = 0;
  }

  // Map up read only parameters
  setup_get_property(this, "hasAggregationCursor", aggregationCursor);
  setup_get_property(this, "hasWriteCommands", writeCommands);
  setup_get_property(this, "hasTextSearch", textSearch);
  setup_get_property(this, "hasAuthCommands", authCommands);
  setup_get_property(this, "hasListCollectionsCommand", listCollections);
  setup_get_property(this, "hasListIndexesCommand", listIndexes);
  setup_get_property(this, "minWireVersion", ismaster.minWireVersion);
  setup_get_property(this, "maxWireVersion", ismaster.maxWireVersion);
  setup_get_property(this, "maxNumberOfDocsInBatch", maxNumberOfDocsInBatch);
}

exports.Store = Store;
exports.ServerCapabilities = ServerCapabilities;
379
node_modules/mongodb/lib/url_parser.js
generated
vendored
Normal file
379
node_modules/mongodb/lib/url_parser.js
generated
vendored
Normal file
@@ -0,0 +1,379 @@
"use strict";

var ReadPreference = require('./read_preference'),
  parser = require('url'),
  f = require('util').format;

module.exports = function(url, options) {
  // Ensure we have a default options object if none set
  options = options || {};
  // Variables
  var connection_part = '';
  var auth_part = '';
  var query_string_part = '';
  var dbName = 'admin';

  // Url parser result
  var result = parser.parse(url, true);

  if(result.protocol != 'mongodb:') {
    throw new Error('invalid schema, expected mongodb');
  }

  if((result.hostname == null || result.hostname == '') && url.indexOf('.sock') == -1) {
    throw new Error('no hostname or hostnames provided in connection string');
  }

  if(result.port == '0') {
    throw new Error('invalid port (zero) with hostname');
  }

  if(!isNaN(parseInt(result.port, 10)) && parseInt(result.port, 10) > 65535) {
    throw new Error('invalid port (larger than 65535) with hostname');
  }

  if(result.path
    && result.path.length > 0
    && result.path[0] != '/'
    && url.indexOf('.sock') == -1) {
    throw new Error('missing delimiting slash between hosts and options');
  }

  if(result.query) {
    for(var name in result.query) {
      if(name.indexOf(':') != -1) {
        throw new Error('double colon in host identifier');
      }

      if(result.query[name] == '') {
        throw new Error('query parameter ' + name + ' is an incomplete value pair');
      }
    }
  }

  if(result.auth) {
    var parts = result.auth.split(':');
    if(url.indexOf(result.auth) != -1 && parts.length > 2) {
      throw new Error('Username with password containing an unescaped colon');
    }

    if(url.indexOf(result.auth) != -1 && result.auth.indexOf('@') != -1) {
      throw new Error('Username containing an unescaped at-sign');
    }
  }

  // Remove query
  var clean = url.split('?').shift();

  // Extract the list of hosts
  var strings = clean.split(',');
  var hosts = [];

  for(var i = 0; i < strings.length; i++) {
    var hostString = strings[i];

    if(hostString.indexOf('mongodb') != -1) {
      if(hostString.indexOf('@') != -1) {
        hosts.push(hostString.split('@').pop());
      } else {
        hosts.push(hostString.substr('mongodb://'.length));
      }
    } else if(hostString.indexOf('/') != -1) {
      hosts.push(hostString.split('/').shift());
    } else if(hostString.indexOf('/') == -1) {
      hosts.push(hostString.trim());
    }
  }

  for(var i = 0; i < hosts.length; i++) {
    var r = parser.parse(f('mongodb://%s', hosts[i].trim()));
    if(r.path && r.path.indexOf(':') != -1) {
      throw new Error('double colon in host identifier');
    }
  }

  // If we have a ? mark cut the query elements off
  if(url.indexOf("?") != -1) {
    query_string_part = url.substr(url.indexOf("?") + 1);
    connection_part = url.substring("mongodb://".length, url.indexOf("?"));
  } else {
    connection_part = url.substring("mongodb://".length);
  }

  // Check if we have auth params
  if(connection_part.indexOf("@") != -1) {
    auth_part = connection_part.split("@")[0];
    connection_part = connection_part.split("@")[1];
  }

  // Check if the connection string has a db
  if(connection_part.indexOf(".sock") != -1) {
    if(connection_part.indexOf(".sock/") != -1) {
      dbName = connection_part.split(".sock/")[1];
      connection_part = connection_part.split("/", connection_part.indexOf(".sock") + ".sock".length);
    }
  } else if(connection_part.indexOf("/") != -1) {
    dbName = connection_part.split("/")[1];
    connection_part = connection_part.split("/")[0];
  }

  // Result object
  var object = {};

  // Pick apart the authentication part of the string
  var authPart = auth_part || '';
  var auth = authPart.split(':', 2);

  // Decode the URI components
  auth[0] = decodeURIComponent(auth[0]);
  if(auth[1]){
    auth[1] = decodeURIComponent(auth[1]);
  }

  // Add auth to final object if we have 2 elements
  if(auth.length == 2) object.auth = {user: auth[0], password: auth[1]};

  // Variables used for temporary storage
  var hostPart;
  var urlOptions;
  var servers;
  var serverOptions = {socketOptions: {}};
  var dbOptions = {read_preference_tags: []};
  var replSetServersOptions = {socketOptions: {}};
  // Add server options to final object
  object.server_options = serverOptions;
  object.db_options = dbOptions;
  object.rs_options = replSetServersOptions;
  object.mongos_options = {};

  // Let's check if we are using a domain socket
  if(url.match(/\.sock/)) {
    // Split out the socket part
    var domainSocket = url.substring(
      url.indexOf("mongodb://") + "mongodb://".length
      , url.lastIndexOf(".sock") + ".sock".length);
    // Clean out any auth stuff if any
    if(domainSocket.indexOf("@") != -1) domainSocket = domainSocket.split("@")[1];
    servers = [{domain_socket: domainSocket}];
  } else {
    // Split up the db
    hostPart = connection_part;
    // Deduplicate servers
    var deduplicatedServers = {};

    // Parse all server results
    servers = hostPart.split(',').map(function(h) {
      var _host, _port, ipv6match;
      // check if it matches [IPv6]:port, where the port number is optional
      if ((ipv6match = /\[([^\]]+)\](?:\:(.+))?/.exec(h))) {
        _host = ipv6match[1];
        _port = parseInt(ipv6match[2], 10) || 27017;
      } else {
        // otherwise assume it's IPv4, or a plain hostname
        var hostPort = h.split(':', 2);
        _host = hostPort[0] || 'localhost';
        _port = hostPort[1] != null ? parseInt(hostPort[1], 10) : 27017;
        // Check for localhost?safe=true style case
        if(_host.indexOf("?") != -1) _host = _host.split(/\?/)[0];
      }

      // No entry returned for a duplicate server
      if(deduplicatedServers[_host + "_" + _port]) return null;
      deduplicatedServers[_host + "_" + _port] = 1;

      // Return the mapped object
      return {host: _host, port: _port};
    }).filter(function(x) {
      return x != null;
    });
  }

  // Get the db name
  object.dbName = dbName || 'admin';
  // Split up all the options
  urlOptions = (query_string_part || '').split(/[&;]/);
  // Ugh, we have to figure out which options go to which constructor manually.
  urlOptions.forEach(function(opt) {
    if(!opt) return;
    var splitOpt = opt.split('='), name = splitOpt[0], value = splitOpt[1];
    // Options implementations
    switch(name) {
      case 'slaveOk':
      case 'slave_ok':
        serverOptions.slave_ok = (value == 'true');
        dbOptions.slaveOk = (value == 'true');
        break;
      case 'maxPoolSize':
      case 'poolSize':
        serverOptions.poolSize = parseInt(value, 10);
        replSetServersOptions.poolSize = parseInt(value, 10);
        break;
      case 'autoReconnect':
      case 'auto_reconnect':
        serverOptions.auto_reconnect = (value == 'true');
        break;
      case 'minPoolSize':
        throw new Error("minPoolSize not supported");
      case 'maxIdleTimeMS':
        throw new Error("maxIdleTimeMS not supported");
      case 'waitQueueMultiple':
        throw new Error("waitQueueMultiple not supported");
      case 'waitQueueTimeoutMS':
        throw new Error("waitQueueTimeoutMS not supported");
      case 'uuidRepresentation':
        throw new Error("uuidRepresentation not supported");
      case 'ssl':
        if(value == 'prefer') {
          serverOptions.ssl = value;
          replSetServersOptions.ssl = value;
          break;
        }
        serverOptions.ssl = (value == 'true');
        replSetServersOptions.ssl = (value == 'true');
        break;
      case 'sslValidate':
        serverOptions.sslValidate = (value == 'true');
        replSetServersOptions.sslValidate = (value == 'true');
        break;
      case 'replicaSet':
      case 'rs_name':
        replSetServersOptions.rs_name = value;
        break;
      case 'reconnectWait':
        replSetServersOptions.reconnectWait = parseInt(value, 10);
        break;
      case 'retries':
        replSetServersOptions.retries = parseInt(value, 10);
        break;
      case 'readSecondary':
      case 'read_secondary':
        replSetServersOptions.read_secondary = (value == 'true');
        break;
      case 'fsync':
        dbOptions.fsync = (value == 'true');
        break;
      case 'journal':
        dbOptions.j = (value == 'true');
        break;
      case 'safe':
        dbOptions.safe = (value == 'true');
        break;
      case 'nativeParser':
      case 'native_parser':
        dbOptions.native_parser = (value == 'true');
        break;
      case 'readConcernLevel':
        dbOptions.readConcern = {level: value};
        break;
      case 'connectTimeoutMS':
        serverOptions.socketOptions.connectTimeoutMS = parseInt(value, 10);
        replSetServersOptions.socketOptions.connectTimeoutMS = parseInt(value, 10);
        break;
      case 'socketTimeoutMS':
        serverOptions.socketOptions.socketTimeoutMS = parseInt(value, 10);
        replSetServersOptions.socketOptions.socketTimeoutMS = parseInt(value, 10);
        break;
      case 'w':
        dbOptions.w = parseInt(value, 10);
        if(isNaN(dbOptions.w)) dbOptions.w = value;
        break;
      case 'authSource':
        dbOptions.authSource = value;
        break;
      case 'gssapiServiceName':
        dbOptions.gssapiServiceName = value;
        break;
      case 'authMechanism':
        if(value == 'GSSAPI') {
          // If no password provided decode only the principal
          if(object.auth == null) {
            var urlDecodeAuthPart = decodeURIComponent(authPart);
            if(urlDecodeAuthPart.indexOf("@") == -1) throw new Error("GSSAPI requires a provided principal");
            object.auth = {user: urlDecodeAuthPart, password: null};
          } else {
            object.auth.user = decodeURIComponent(object.auth.user);
          }
        } else if(value == 'MONGODB-X509') {
          object.auth = {user: decodeURIComponent(authPart)};
        }

        // Only support GSSAPI or MONGODB-CR for now
        if(value != 'GSSAPI'
          && value != 'MONGODB-X509'
          && value != 'MONGODB-CR'
          && value != 'DEFAULT'
          && value != 'SCRAM-SHA-1'
          && value != 'PLAIN')
          throw new Error("only DEFAULT, GSSAPI, PLAIN, MONGODB-X509, SCRAM-SHA-1 or MONGODB-CR is supported by authMechanism");

        // Authentication mechanism
        dbOptions.authMechanism = value;
        break;
      case 'authMechanismProperties':
        // Split up into key, value pairs
        var values = value.split(',');
        var o = {};
        // For each value split into key, value
        values.forEach(function(x) {
          var v = x.split(':');
          o[v[0]] = v[1];
        });

        // Set all authMechanismProperties
        dbOptions.authMechanismProperties = o;
        // Set the service name value
        if(typeof o.SERVICE_NAME == 'string') dbOptions.gssapiServiceName = o.SERVICE_NAME;
        break;
      case 'wtimeoutMS':
        dbOptions.wtimeout = parseInt(value, 10);
        break;
      case 'readPreference':
        if(!ReadPreference.isValid(value)) throw new Error("readPreference must be either primary/primaryPreferred/secondary/secondaryPreferred/nearest");
        dbOptions.readPreference = value;
        break;
      case 'readPreferenceTags':
        // Decode the value
        value = decodeURIComponent(value);
        // Contains the tag object
        var tagObject = {};
        if(value == null || value == '') {
          dbOptions.read_preference_tags.push(tagObject);
          break;
        }

        // Split up the tags
        var tags = value.split(/\,/);
        for(var i = 0; i < tags.length; i++) {
          var parts = tags[i].trim().split(/\:/);
          tagObject[parts[0]] = parts[1];
        }

        // Set the preferences tags
        dbOptions.read_preference_tags.push(tagObject);
        break;
      default:
        break;
    }
  });

  // No tags: should be null (not [])
  if(dbOptions.read_preference_tags.length === 0) {
    dbOptions.read_preference_tags = null;
  }

  // Validate that there are no invalid write concern combinations
  if((dbOptions.w == -1 || dbOptions.w == 0) && (
    dbOptions.journal == true
    || dbOptions.fsync == true
    || dbOptions.safe == true)) throw new Error("w set to -1 or 0 cannot be combined with safe/w/journal/fsync");

  // If no read preference set it to primary
  if(!dbOptions.readPreference) {
    dbOptions.readPreference = 'primary';
  }

  // Add servers to result
  object.servers = servers;
  // Return the parsed object
  return object;
}
234
node_modules/mongodb/lib/utils.js
generated
vendored
Normal file
234
node_modules/mongodb/lib/utils.js
generated
vendored
Normal file
@@ -0,0 +1,234 @@
"use strict";

var MongoError = require('mongodb-core').MongoError,
  f = require('util').format;

var shallowClone = function(obj) {
  var copy = {};
  for(var name in obj) copy[name] = obj[name];
  return copy;
}

// Set simple property
var getSingleProperty = function(obj, name, value) {
  Object.defineProperty(obj, name, {
    enumerable:true,
    get: function() {
      return value;
    }
  });
}

var formatSortValue = exports.formatSortValue = function(sortDirection) {
  var value = ("" + sortDirection).toLowerCase();

  switch (value) {
    case 'ascending':
    case 'asc':
    case '1':
      return 1;
    case 'descending':
    case 'desc':
    case '-1':
      return -1;
    default:
      throw new Error("Illegal sort clause, must be of the form "
        + "[['field1', '(ascending|descending)'], "
        + "['field2', '(ascending|descending)']]");
  }
};

var formattedOrderClause = exports.formattedOrderClause = function(sortValue) {
  var orderBy = {};
  if(sortValue == null) return null;
  if (Array.isArray(sortValue)) {
    if(sortValue.length === 0) {
      return null;
    }

    for(var i = 0; i < sortValue.length; i++) {
      if(sortValue[i].constructor == String) {
        orderBy[sortValue[i]] = 1;
      } else {
        orderBy[sortValue[i][0]] = formatSortValue(sortValue[i][1]);
      }
    }
  } else if(sortValue != null && typeof sortValue == 'object') {
    orderBy = sortValue;
  } else if (typeof sortValue == 'string') {
    orderBy[sortValue] = 1;
  } else {
    throw new Error("Illegal sort clause, must be of the form " +
      "[['field1', '(ascending|descending)'], ['field2', '(ascending|descending)']]");
  }

  return orderBy;
};

var checkCollectionName = function checkCollectionName (collectionName) {
  if('string' !== typeof collectionName) {
    throw Error("collection name must be a String");
  }

  if(!collectionName || collectionName.indexOf('..') != -1) {
    throw Error("collection names cannot be empty");
  }

  if(collectionName.indexOf('$') != -1 &&
    collectionName.match(/((^\$cmd)|(oplog\.\$main))/) == null) {
    throw Error("collection names must not contain '$'");
  }

  if(collectionName.match(/^\.|\.$/) != null) {
    throw Error("collection names must not start or end with '.'");
  }

  // Validate that we are not passing 0x00 in the collection name
  if(!!~collectionName.indexOf("\x00")) {
    throw new Error("collection names cannot contain a null character");
  }
};

var handleCallback = function(callback, err, value1, value2) {
  try {
    if(callback == null) return;
    if(value2) return callback(err, value1, value2);
    return callback(err, value1);
  } catch(err) {
    process.nextTick(function() { throw err; });
    return false;
  }

  return true;
}

/**
 * Wrap a Mongo error document in an Error instance
 * @ignore
 * @api private
 */
var toError = function(error) {
  if (error instanceof Error) return error;

  var msg = error.err || error.errmsg || error.errMessage || error;
  var e = MongoError.create({message: msg, driver:true});

  // Get all object keys
  var keys = typeof error == 'object'
    ? Object.keys(error)
    : [];

  for(var i = 0; i < keys.length; i++) {
    e[keys[i]] = error[keys[i]];
  }

  return e;
}

/**
 * @ignore
 */
var normalizeHintField = function normalizeHintField(hint) {
  var finalHint = null;

  if(typeof hint == 'string') {
    finalHint = hint;
  } else if(Array.isArray(hint)) {
    finalHint = {};

    hint.forEach(function(param) {
      finalHint[param] = 1;
    });
  } else if(hint != null && typeof hint == 'object') {
    finalHint = {};
    for (var name in hint) {
      finalHint[name] = hint[name];
    }
  }

  return finalHint;
};

/**
 * Create index name based on field spec
 *
 * @ignore
 * @api private
 */
var parseIndexOptions = function(fieldOrSpec) {
  var fieldHash = {};
  var indexes = [];
  var keys;

  // Get all the fields accordingly
  if('string' == typeof fieldOrSpec) {
    // 'type'
    indexes.push(fieldOrSpec + '_' + 1);
    fieldHash[fieldOrSpec] = 1;
  } else if(Array.isArray(fieldOrSpec)) {
    fieldOrSpec.forEach(function(f) {
      if('string' == typeof f) {
        // [{location:'2d'}, 'type']
        indexes.push(f + '_' + 1);
        fieldHash[f] = 1;
      } else if(Array.isArray(f)) {
        // [['location', '2d'],['type', 1]]
        indexes.push(f[0] + '_' + (f[1] || 1));
        fieldHash[f[0]] = f[1] || 1;
      } else if(isObject(f)) {
        // [{location:'2d'}, {type:1}]
        keys = Object.keys(f);
        keys.forEach(function(k) {
          indexes.push(k + '_' + f[k]);
          fieldHash[k] = f[k];
        });
      } else {
        // undefined (ignore)
      }
    });
  } else if(isObject(fieldOrSpec)) {
    // {location:'2d', type:1}
    keys = Object.keys(fieldOrSpec);
    keys.forEach(function(key) {
      indexes.push(key + '_' + fieldOrSpec[key]);
      fieldHash[key] = fieldOrSpec[key];
    });
  }

  return {
    name: indexes.join("_"), keys: keys, fieldHash: fieldHash
  }
}

var isObject = exports.isObject = function (arg) {
  return '[object Object]' == toString.call(arg)
}

var debugOptions = function(debugFields, options) {
  var finaloptions = {};
  debugFields.forEach(function(n) {
    finaloptions[n] = options[n];
  });

  return finaloptions;
}

var decorateCommand = function(command, options, exclude) {
  for(var name in options) {
    if(exclude[name] == null) command[name] = options[name];
  }

  return command;
}

exports.shallowClone = shallowClone;
exports.getSingleProperty = getSingleProperty;
exports.checkCollectionName = checkCollectionName;
exports.toError = toError;
exports.formattedOrderClause = formattedOrderClause;
exports.parseIndexOptions = parseIndexOptions;
exports.normalizeHintField = normalizeHintField;
exports.handleCallback = handleCallback;
exports.decorateCommand = decorateCommand;
exports.isObject = isObject;
exports.debugOptions = debugOptions;
427
node_modules/mongodb/mongolabs.js
generated
vendored
Normal file
427
node_modules/mongodb/mongolabs.js
generated
vendored
Normal file
@@ -0,0 +1,427 @@
/*
 * PROPRIETARY AND CONFIDENTIAL
 *
 * The contents of this example are proprietary and confidential, and may not be used or distributed without express
 * written permission from MongoLab.
 */

var mongodb = require("mongodb");
var async = require("async");

var ObjectId = mongodb.ObjectID;

//mongodb.Logger.setLevel("debug");
var uri = process.env.MONGODB_URI;

function log(s) {
  console.log(new Date().toISOString() + " " + s);
}

log("Connecting...");
mongodb.MongoClient.connect(
  uri,
  {
    replSet: {
      //poolSize: 10
      socketOptions: {
        connectTimeoutMS: 60 * 1000
      }
    }
  },
  function (err, db) {
    if (err) {
      log("Error trying to connect.");
      log(err.stack);
    } else {
      log("Connected.");
      db.serverConfig.on('joined', function(t, s) {
        log("joined :: " + t + " :: " + s.name);
      });
      db.serverConfig.on('left', function(t, s) {
        log("left :: " + t + " :: " + s.name);
      });
      db.serverConfig.on('timeout', function(err) {
        log("timeout :: ");
        log(err.stack);
      });
      log("Running aggregations...");
      async.forEachOf(
        aggregations,
        function (item, i, cb) {
          log("Running aggregation " + i + "...");
          db.collection(item.collection).aggregate(item.pipeline, function (err, result) {
            if (err) {
              log("Error running aggregation " + i + ".");
              log(err.stack);
            } else {
              log("Done running aggregation " + i + ".");
            }
            cb();
          });
        },
        function (err) {
          if (err) {
            log("Untrapped error.");
            log(err.stack);
          } else {
            log("Finished running all aggregations.");
          }
          log("Closing database...");
          db.close();
        }
      );
    }
  }
);

var aggregations = [
  // ========== 1 ==========
  {
    collection: "tlogs",
    pipeline: [
      {
        "$match": {
          "_type": "tlog",
          "organization": ObjectId("53eb1ee2e6c77111203d8503"),
          "businessDate": {
            "$gte": ISODate("2015-11-01T00:00:00.000+0000"),
            "$lt": ISODate("2015-11-28T23:59:59.999+0000")
          }
        }
      },
      { "$unwind": "$order" },
      { "$unwind": "$order.orderedOffers" },
      {
        "$redact": {
          "$cond": {
            "if": { "$not": "$order.orderedOffers.onTheHouse" },
            "then": "$$KEEP",
            "else": "$$PRUNE"
          }
        }
      },
      {
        "$redact": {
          "$cond": {
            "if": { "$not": "$order.orderedOffers.cancellation" },
            "then": "$$KEEP",
            "else": "$$PRUNE"
          }
        }
      },
      {
        "$group": {
          "_id": "$order.orderedOffers.offer",
          "offer": { "$first": "$order.orderedOffers.offer" },
          "menu": { "$first": "$order.orderedOffers.menu" },
          "totalAmount": { "$sum": "$order.orderedOffers.amount" },
          "totalCount": { "$sum": NumberInt(1) },
          "totalDiners": { "$sum": "$diners" }
        }
      }
    ]
  },
  // ========== 2 ==========
  {
    collection: "tlogs",
    pipeline: [
      {
        "$match": {
          "_type": "tlog",
          "organization": ObjectId("53eb1ee2e6c77111203d8503"),
          "businessDate": {
            "$gte": ISODate("2015-11-01T00:00:00.000+0000"),
            "$lt": ISODate("2015-11-28T23:59:59.999+0000")
          }
        }
      },
      { "$unwind": "$order" },
      { "$unwind": "$order.orderedItems" },
      {
        "$redact": {
          "$cond": {
            "if": { "$not": "$order.orderedItems.onTheHouse" },
            "then": "$$PRUNE",
            "else": "$$KEEP"
          }
        }
      },
      {
        "$group": {
          "_id": { "category": "$order.orderedItems.category" },
          "totalItems": { "$sum": NumberInt(1) },
          "totalAmount": { "$sum": "$order.orderedItems.price" }
        }
      }
    ]
  },
  // ========== 3 ==========
  {
    collection: "tlogs",
    pipeline: [
      {
        "$match": {
          "_type": "tlog",
          "organization": ObjectId("53eb1ee2e6c77111203d8503"),
          "businessDate": {
            "$gte": ISODate("2015-11-01T00:00:00.000+0000"),
            "$lt": ISODate("2015-11-28T23:59:59.999+0000")
          },
          "order": { "$elemMatch": { "onTheHouse": null } }
        }
      },
      { "$unwind": "$order" },
      { "$unwind": "$order.tips" },
      {
        "$project": {
          "_id": "$order.tips._id",
          "tableTip": {
            "$cond": [
              {
                "$eq": [
                  {
                    "$size": {
                      "$setIntersection": [
                        "$order.tableIds",
                        [
                          ObjectId("53eb1ee4e6c77111203d852e"), ObjectId("53eb1ee4e6c77111203d852f"),
                          ObjectId("53eb1ee4e6c77111203d8530"), ObjectId("53eb1ee4e6c77111203d8531"),
                          ObjectId("53eb1ee4e6c77111203d8532"), ObjectId("53eb1ee4e6c77111203d8533"),
                          ObjectId("53eb1ee4e6c77111203d8534"), ObjectId("53eb1ee4e6c77111203d8535"),
                          ObjectId("53eb1ee4e6c77111203d8536"), ObjectId("53eb1ee4e6c77111203d8537"),
                          ObjectId("53eb1ee4e6c77111203d8538"), ObjectId("53eb1ee4e6c77111203d8539"),
                          ObjectId("53eb1ee4e6c77111203d853a"), ObjectId("53eb1ee4e6c77111203d853b"),
                          ObjectId("53eb1ee4e6c77111203d853c"), ObjectId("53eb1ee4e6c77111203d853d"),
                          ObjectId("53eb1ee4e6c77111203d853e"), ObjectId("53eb1ee4e6c77111203d853f"),
                          ObjectId("53eb1ee4e6c77111203d8540"), ObjectId("53eb1ee4e6c77111203d8541"),
                          ObjectId("53eb1ee4e6c77111203d8542"), ObjectId("53eb1ee4e6c77111203d8543"),
                          ObjectId("53eb1ee4e6c77111203d8544"), ObjectId("53eb1ee4e6c77111203d8545"),
                          ObjectId("53eb1ee4e6c77111203d8546"), ObjectId("53eb1ee4e6c77111203d8547"),
                          ObjectId("53eb1ee4e6c77111203d8548"), ObjectId("53eb1ee4e6c77111203d8549"),
                          ObjectId("53eb1ee4e6c77111203d854a"), ObjectId("53eb1ee4e6c77111203d854b"),
                          ObjectId("53eb1ee4e6c77111203d854c"), ObjectId("53eb1ee4e6c77111203d854d"),
                          ObjectId("53eb1ee4e6c77111203d854e"), ObjectId("53eb1ee4e6c77111203d854f"),
                          ObjectId("53eb1ee4e6c77111203d8550"), ObjectId("53eb1ee4e6c77111203d8551"),
                          ObjectId("53eb1ee4e6c77111203d8552"), ObjectId("53eb1ee4e6c77111203d8553"),
                          ObjectId("53eb1ee4e6c77111203d8554"), ObjectId("53eb1ee4e6c77111203d8555")
                        ]
                      ]
                    }
                  },
                  NumberInt(0)
                ]
              },
              false,
              true
            ]
          },
          "amount": "$order.tips.amount"
        }
      },
      {
        "$group": {
          "_id": "$tableTip",
          "tableTip": { "$first": "$tableTip" },
          "totalAmount": { "$sum": "$amount" },
          "totalCount": { "$sum": NumberInt(1) }
        }
      }
    ]
  },
  // ========== 4 ==========
  {
    collection: "tlogs",
    pipeline: [
      {
        "$match": {
          "_type": "tlog",
          "organization": ObjectId("53eb1ee2e6c77111203d8503"),
          "businessDate": {
            "$gte": ISODate("2015-11-01T00:00:00.000+0000"),
            "$lt": ISODate("2015-11-28T23:59:59.999+0000")
          }
        }
      },
      { "$unwind": "$order" },
      {
        "$group": {
          "_id": { "orderType": "$order.orderType" },
          "orderType": { "$first": "$order.orderType" },
          "diners": { "$sum": "$diners" },
          "totalCount": { "$sum": NumberInt(1) }
        }
      }
    ]
  },
  // 5, 6, and 7 deleted by Akira just to reduce character count within this JIRA comment.
  // ========== 8 ==========
  {
    collection: "tlogs",
    pipeline: [
      {
        "$match": {
          "_type": "tlog",
          "organization": ObjectId("53eb1ee2e6c77111203d8503"),
          "businessDate": {
            "$gte": ISODate("2015-10-01T00:00:00.000+0000"),
            "$lt": ISODate("2015-11-28T23:59:59.999+0000")
          }
        }
      },
      { "$unwind": "$order" },
      { "$unwind": "$order.orderedItems" },
      {
        "$redact": {
          "$cond": {
            "if": {
              "$and": [
                { "$not": "$order.orderedItems.cancellation" },
                { "$not": "$order.orderedItems.onTheHouse" }
              ]
            },
            "then": "$$KEEP",
            "else": "$$KEEP"
          }
        }
      },
      { "$unwind": "$order.orderedItems.selectedModifiers" },
      {
        "$group": {
          "_id": { "category": "$order.orderedItems.category" },
          "totalAmount": { "$sum": "$order.orderedItems.selectedModifiers.price" }
        }
      }
    ]
  },
  // ========== 9 ==========
  {
    collection: "tlogs",
    pipeline: [
      {
        "$match": {
          "_type": "tlog",
          "organization": ObjectId("53eb1ee2e6c77111203d8503"),
          "businessDate": {
            "$gte": ISODate("2015-10-01T00:00:00.000+0000"),
            "$lt": ISODate("2015-11-28T23:59:59.999+0000")
          },
          "order.orderType": "Refund"
        }
      },
      { "$unwind": "$order" },
      { "$unwind": "$order.payments" },
      {
        "$group": {
          "_id": null,
          "totalCount": { "$sum": NumberInt(1) },
          "totalAmount": { "$sum": "$order.payments.amount" }
        }
      }
    ]
  }
];

function ISODate(x) {
  return new Date(x);
}

function NumberInt(x) {
  return x;
}
5
node_modules/mongodb/node_modules/readable-stream/.npmignore
generated
vendored
Normal file
5
node_modules/mongodb/node_modules/readable-stream/.npmignore
generated
vendored
Normal file
@@ -0,0 +1,5 @@
build/
test/
examples/
fs.js
zlib.js
27
node_modules/mongodb/node_modules/readable-stream/LICENSE
generated
vendored
Normal file
27
node_modules/mongodb/node_modules/readable-stream/LICENSE
generated
vendored
Normal file
@@ -0,0 +1,27 @@
Copyright (c) Isaac Z. Schlueter ("Author")
All rights reserved.

The BSD License

Redistribution and use in source and binary forms, with or without
modification, are permitted provided that the following conditions
are met:

1. Redistributions of source code must retain the above copyright
   notice, this list of conditions and the following disclaimer.

2. Redistributions in binary form must reproduce the above copyright
   notice, this list of conditions and the following disclaimer in the
   documentation and/or other materials provided with the distribution.

THIS SOFTWARE IS PROVIDED BY THE AUTHOR AND CONTRIBUTORS ``AS IS'' AND
ANY EXPRESS OR IMPLIED WARRANTIES, INCLUDING, BUT NOT LIMITED TO, THE
IMPLIED WARRANTIES OF MERCHANTABILITY AND FITNESS FOR A PARTICULAR
PURPOSE ARE DISCLAIMED. IN NO EVENT SHALL THE AUTHOR OR CONTRIBUTORS
BE LIABLE FOR ANY DIRECT, INDIRECT, INCIDENTAL, SPECIAL, EXEMPLARY, OR
CONSEQUENTIAL DAMAGES (INCLUDING, BUT NOT LIMITED TO, PROCUREMENT OF
SUBSTITUTE GOODS OR SERVICES; LOSS OF USE, DATA, OR PROFITS; OR
BUSINESS INTERRUPTION) HOWEVER CAUSED AND ON ANY THEORY OF LIABILITY,
WHETHER IN CONTRACT, STRICT LIABILITY, OR TORT (INCLUDING NEGLIGENCE
OR OTHERWISE) ARISING IN ANY WAY OUT OF THE USE OF THIS SOFTWARE, EVEN
IF ADVISED OF THE POSSIBILITY OF SUCH DAMAGE.
15
node_modules/mongodb/node_modules/readable-stream/README.md
generated
vendored
Normal file
15
node_modules/mongodb/node_modules/readable-stream/README.md
generated
vendored
Normal file
@@ -0,0 +1,15 @@
# readable-stream

***Node-core streams for userland***

[](https://nodei.co/npm/readable-stream/)
[](https://nodei.co/npm/readable-stream/)

This package is a mirror of the Streams2 and Streams3 implementations in Node-core.

If you want to guarantee a stable streams base, regardless of what version of Node you, or the users of your libraries are using, use **readable-stream** *only* and avoid the *"stream"* module in Node-core.

**readable-stream** comes in two major versions, v1.0.x and v1.1.x. The former tracks the Streams2 implementation in Node 0.10, including bug-fixes and minor improvements as they are added. The latter tracks Streams3 as it develops in Node 0.11; we will likely see a v1.2.x branch for Node 0.12.

**readable-stream** uses proper patch-level versioning so if you pin to `"~1.0.0"` you’ll get the latest Node 0.10 Streams2 implementation, including any fixes and minor non-breaking improvements. The patch-level versions of 1.0.x and 1.1.x should mirror the patch-level versions of Node-core releases. You should prefer the **1.0.x** releases for now and when you’re ready to start using Streams3, pin to `"~1.1.0"`
1
node_modules/mongodb/node_modules/readable-stream/duplex.js
generated
vendored
Normal file
1
node_modules/mongodb/node_modules/readable-stream/duplex.js
generated
vendored
Normal file
@@ -0,0 +1 @@
module.exports = require("./lib/_stream_duplex.js")
89
node_modules/mongodb/node_modules/readable-stream/lib/_stream_duplex.js
generated
vendored
Normal file
89
node_modules/mongodb/node_modules/readable-stream/lib/_stream_duplex.js
generated
vendored
Normal file
@@ -0,0 +1,89 @@
// Copyright Joyent, Inc. and other Node contributors.
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the
// "Software"), to deal in the Software without restriction, including
// without limitation the rights to use, copy, modify, merge, publish,
// distribute, sublicense, and/or sell copies of the Software, and to permit
// persons to whom the Software is furnished to do so, subject to the
// following conditions:
//
// The above copyright notice and this permission notice shall be included
// in all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN
// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
// USE OR OTHER DEALINGS IN THE SOFTWARE.

// a duplex stream is just a stream that is both readable and writable.
// Since JS doesn't have multiple prototypal inheritance, this class
// prototypally inherits from Readable, and then parasitically from
// Writable.

module.exports = Duplex;

/*<replacement>*/
var objectKeys = Object.keys || function (obj) {
  var keys = [];
  for (var key in obj) keys.push(key);
  return keys;
}
/*</replacement>*/


/*<replacement>*/
var util = require('core-util-is');
util.inherits = require('inherits');
/*</replacement>*/

var Readable = require('./_stream_readable');
var Writable = require('./_stream_writable');

util.inherits(Duplex, Readable);

forEach(objectKeys(Writable.prototype), function(method) {
  if (!Duplex.prototype[method])
    Duplex.prototype[method] = Writable.prototype[method];
});

function Duplex(options) {
  if (!(this instanceof Duplex))
    return new Duplex(options);

  Readable.call(this, options);
  Writable.call(this, options);

  if (options && options.readable === false)
    this.readable = false;

  if (options && options.writable === false)
    this.writable = false;

  this.allowHalfOpen = true;
  if (options && options.allowHalfOpen === false)
    this.allowHalfOpen = false;

  this.once('end', onend);
}

// the no-half-open enforcer
function onend() {
  // if we allow half-open state, or if the writable side ended,
  // then we're ok.
  if (this.allowHalfOpen || this._writableState.ended)
    return;

  // no more data can be written.
  // But allow more writes to happen in this tick.
  process.nextTick(this.end.bind(this));
}

function forEach (xs, f) {
  for (var i = 0, l = xs.length; i < l; i++) {
    f(xs[i], i);
  }
}
46
node_modules/mongodb/node_modules/readable-stream/lib/_stream_passthrough.js
generated
vendored
Normal file
46
node_modules/mongodb/node_modules/readable-stream/lib/_stream_passthrough.js
generated
vendored
Normal file
@@ -0,0 +1,46 @@
// Copyright Joyent, Inc. and other Node contributors.
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the
// "Software"), to deal in the Software without restriction, including
// without limitation the rights to use, copy, modify, merge, publish,
// distribute, sublicense, and/or sell copies of the Software, and to permit
// persons to whom the Software is furnished to do so, subject to the
// following conditions:
//
// The above copyright notice and this permission notice shall be included
// in all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN
// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
// USE OR OTHER DEALINGS IN THE SOFTWARE.

// a passthrough stream.
// basically just the most minimal sort of Transform stream.
// Every written chunk gets output as-is.

module.exports = PassThrough;

var Transform = require('./_stream_transform');

/*<replacement>*/
var util = require('core-util-is');
util.inherits = require('inherits');
/*</replacement>*/

util.inherits(PassThrough, Transform);

function PassThrough(options) {
  if (!(this instanceof PassThrough))
    return new PassThrough(options);

  Transform.call(this, options);
}

PassThrough.prototype._transform = function(chunk, encoding, cb) {
  cb(null, chunk);
};
982
node_modules/mongodb/node_modules/readable-stream/lib/_stream_readable.js
generated
vendored
Normal file
982
node_modules/mongodb/node_modules/readable-stream/lib/_stream_readable.js
generated
vendored
Normal file
@@ -0,0 +1,982 @@
|
||||
// Copyright Joyent, Inc. and other Node contributors.
|
||||
//
|
||||
// Permission is hereby granted, free of charge, to any person obtaining a
|
||||
// copy of this software and associated documentation files (the
|
||||
// "Software"), to deal in the Software without restriction, including
|
||||
// without limitation the rights to use, copy, modify, merge, publish,
|
||||
// distribute, sublicense, and/or sell copies of the Software, and to permit
|
||||
// persons to whom the Software is furnished to do so, subject to the
|
||||
// following conditions:
|
||||
//
|
||||
// The above copyright notice and this permission notice shall be included
|
||||
// in all copies or substantial portions of the Software.
|
||||
//
|
||||
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
|
||||
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
|
||||
// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN
|
||||
// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
|
||||
// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
|
||||
// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
|
||||
// USE OR OTHER DEALINGS IN THE SOFTWARE.
|
||||
|
||||
module.exports = Readable;
|
||||
|
||||
/*<replacement>*/
|
||||
var isArray = require('isarray');
|
||||
/*</replacement>*/
|
||||
|
||||
|
||||
/*<replacement>*/
|
||||
var Buffer = require('buffer').Buffer;
|
||||
/*</replacement>*/
|
||||
|
||||
Readable.ReadableState = ReadableState;
|
||||
|
||||
var EE = require('events').EventEmitter;
|
||||
|
||||
/*<replacement>*/
|
||||
if (!EE.listenerCount) EE.listenerCount = function(emitter, type) {
|
||||
return emitter.listeners(type).length;
|
||||
};
|
||||
/*</replacement>*/
|
||||
|
||||
var Stream = require('stream');
|
||||
|
||||
/*<replacement>*/
|
||||
var util = require('core-util-is');
|
||||
util.inherits = require('inherits');
|
||||
/*</replacement>*/
|
||||
|
||||
var StringDecoder;
|
||||
|
||||
util.inherits(Readable, Stream);
|
||||
|
||||
function ReadableState(options, stream) {
|
||||
options = options || {};
|
||||
|
||||
// the point at which it stops calling _read() to fill the buffer
|
||||
// Note: 0 is a valid value, means "don't call _read preemptively ever"
|
||||
var hwm = options.highWaterMark;
|
||||
this.highWaterMark = (hwm || hwm === 0) ? hwm : 16 * 1024;
|
||||
|
||||
// cast to ints.
|
||||
this.highWaterMark = ~~this.highWaterMark;
|
||||
|
||||
this.buffer = [];
|
||||
this.length = 0;
|
||||
this.pipes = null;
|
||||
this.pipesCount = 0;
|
||||
this.flowing = false;
|
||||
this.ended = false;
|
||||
this.endEmitted = false;
|
||||
this.reading = false;
|
||||
|
||||
// In streams that never have any data, and do push(null) right away,
|
||||
// the consumer can miss the 'end' event if they do some I/O before
|
||||
// consuming the stream. So, we don't emit('end') until some reading
|
||||
// happens.
|
||||
this.calledRead = false;
|
||||
|
||||
// a flag to be able to tell if the onwrite cb is called immediately,
|
||||
// or on a later tick. We set this to true at first, becuase any
|
||||
// actions that shouldn't happen until "later" should generally also
|
||||
// not happen before the first write call.
|
||||
this.sync = true;
|
||||
|
||||
// whenever we return null, then we set a flag to say
|
||||
// that we're awaiting a 'readable' event emission.
|
||||
this.needReadable = false;
|
||||
this.emittedReadable = false;
|
||||
this.readableListening = false;
|
||||
|
||||
|
||||
// object stream flag. Used to make read(n) ignore n and to
|
||||
// make all the buffer merging and length checks go away
|
||||
this.objectMode = !!options.objectMode;
|
||||
|
||||
// Crypto is kind of old and crusty. Historically, its default string
|
||||
// encoding is 'binary' so we have to make this configurable.
|
||||
// Everything else in the universe uses 'utf8', though.
|
||||
this.defaultEncoding = options.defaultEncoding || 'utf8';
|
||||
|
||||
// when piping, we only care about 'readable' events that happen
|
||||
// after read()ing all the bytes and not getting any pushback.
|
||||
this.ranOut = false;
|
||||
|
||||
// the number of writers that are awaiting a drain event in .pipe()s
|
||||
this.awaitDrain = 0;
|
||||
|
||||
// if true, a maybeReadMore has been scheduled
|
||||
this.readingMore = false;
|
||||
|
||||
this.decoder = null;
|
||||
this.encoding = null;
|
||||
if (options.encoding) {
|
||||
if (!StringDecoder)
|
||||
StringDecoder = require('string_decoder/').StringDecoder;
|
||||
this.decoder = new StringDecoder(options.encoding);
|
||||
this.encoding = options.encoding;
|
||||
}
|
||||
}
|
||||
|
||||
function Readable(options) {
|
||||
if (!(this instanceof Readable))
|
||||
return new Readable(options);
|
||||
|
||||
this._readableState = new ReadableState(options, this);
|
||||
|
||||
// legacy
|
||||
this.readable = true;
|
||||
|
||||
Stream.call(this);
|
||||
}
|
||||
|
||||
// Manually shove something into the read() buffer.
|
||||
// This returns true if the highWaterMark has not been hit yet,
|
||||
// similar to how Writable.write() returns true if you should
|
||||
// write() some more.
|
||||
Readable.prototype.push = function(chunk, encoding) {
|
||||
var state = this._readableState;
|
||||
|
||||
if (typeof chunk === 'string' && !state.objectMode) {
|
||||
encoding = encoding || state.defaultEncoding;
|
||||
if (encoding !== state.encoding) {
|
||||
chunk = new Buffer(chunk, encoding);
|
||||
encoding = '';
|
||||
}
|
||||
}
|
||||
|
||||
return readableAddChunk(this, state, chunk, encoding, false);
|
||||
};
|
||||
|
||||
// Unshift should *always* be something directly out of read()
|
||||
Readable.prototype.unshift = function(chunk) {
|
||||
var state = this._readableState;
|
||||
return readableAddChunk(this, state, chunk, '', true);
|
||||
};
|
||||
|
||||
function readableAddChunk(stream, state, chunk, encoding, addToFront) {
|
||||
var er = chunkInvalid(state, chunk);
|
||||
if (er) {
|
||||
stream.emit('error', er);
|
||||
} else if (chunk === null || chunk === undefined) {
|
||||
state.reading = false;
|
||||
if (!state.ended)
|
||||
onEofChunk(stream, state);
|
||||
} else if (state.objectMode || chunk && chunk.length > 0) {
|
||||
if (state.ended && !addToFront) {
|
||||
var e = new Error('stream.push() after EOF');
|
||||
stream.emit('error', e);
|
||||
} else if (state.endEmitted && addToFront) {
|
||||
var e = new Error('stream.unshift() after end event');
|
||||
stream.emit('error', e);
|
||||
} else {
|
||||
if (state.decoder && !addToFront && !encoding)
|
||||
chunk = state.decoder.write(chunk);
|
||||
|
||||
// update the buffer info.
|
||||
state.length += state.objectMode ? 1 : chunk.length;
|
||||
if (addToFront) {
|
||||
state.buffer.unshift(chunk);
|
||||
} else {
|
||||
state.reading = false;
|
||||
state.buffer.push(chunk);
|
||||
}
|
||||
|
||||
if (state.needReadable)
|
||||
emitReadable(stream);
|
||||
|
||||
maybeReadMore(stream, state);
|
||||
}
|
||||
} else if (!addToFront) {
|
||||
state.reading = false;
|
||||
}
|
||||
|
||||
return needMoreData(state);
|
||||
}
|
||||
|
||||
|
||||
|
||||
// if it's past the high water mark, we can push in some more.
|
||||
// Also, if we have no data yet, we can stand some
|
||||
// more bytes. This is to work around cases where hwm=0,
|
||||
// such as the repl. Also, if the push() triggered a
|
||||
// readable event, and the user called read(largeNumber) such that
|
||||
// needReadable was set, then we ought to push more, so that another
|
||||
// 'readable' event will be triggered.
|
||||
function needMoreData(state) {
|
||||
return !state.ended &&
|
||||
(state.needReadable ||
|
||||
state.length < state.highWaterMark ||
|
||||
state.length === 0);
|
||||
}
|
||||
|
||||
// backwards compatibility.
|
||||
Readable.prototype.setEncoding = function(enc) {
|
||||
if (!StringDecoder)
|
||||
StringDecoder = require('string_decoder/').StringDecoder;
|
||||
this._readableState.decoder = new StringDecoder(enc);
|
||||
this._readableState.encoding = enc;
|
||||
};
|
||||
|
||||
// Don't raise the hwm > 128MB
|
||||
var MAX_HWM = 0x800000;
|
||||
function roundUpToNextPowerOf2(n) {
|
||||
if (n >= MAX_HWM) {
|
||||
n = MAX_HWM;
|
||||
} else {
|
||||
// Get the next highest power of 2
|
||||
n--;
|
||||
for (var p = 1; p < 32; p <<= 1) n |= n >> p;
|
||||
n++;
|
||||
}
|
||||
return n;
|
||||
}
|
||||
|
||||
function howMuchToRead(n, state) {
|
||||
if (state.length === 0 && state.ended)
|
||||
return 0;
|
||||
|
||||
if (state.objectMode)
|
||||
return n === 0 ? 0 : 1;
|
||||
|
||||
if (n === null || isNaN(n)) {
|
||||
// only flow one buffer at a time
|
||||
if (state.flowing && state.buffer.length)
|
||||
return state.buffer[0].length;
|
||||
else
|
||||
return state.length;
|
||||
}
|
||||
|
||||
if (n <= 0)
|
||||
return 0;
|
||||
|
||||
// If we're asking for more than the target buffer level,
|
||||
// then raise the water mark. Bump up to the next highest
|
||||
// power of 2, to prevent increasing it excessively in tiny
|
||||
// amounts.
|
||||
if (n > state.highWaterMark)
|
||||
state.highWaterMark = roundUpToNextPowerOf2(n);
|
||||
|
||||
// don't have that much. return null, unless we've ended.
|
||||
if (n > state.length) {
|
||||
if (!state.ended) {
|
||||
state.needReadable = true;
|
||||
return 0;
|
||||
} else
|
||||
return state.length;
|
||||
}
|
||||
|
||||
return n;
|
||||
}
|
||||
|
||||
// you can override either this method, or the async _read(n) below.
|
||||
Readable.prototype.read = function(n) {
|
||||
var state = this._readableState;
|
||||
state.calledRead = true;
|
||||
var nOrig = n;
|
||||
var ret;
|
||||
|
||||
if (typeof n !== 'number' || n > 0)
|
||||
state.emittedReadable = false;
|
||||
|
||||
// if we're doing read(0) to trigger a readable event, but we
|
||||
// already have a bunch of data in the buffer, then just trigger
|
||||
// the 'readable' event and move on.
|
||||
if (n === 0 &&
|
||||
state.needReadable &&
|
||||
(state.length >= state.highWaterMark || state.ended)) {
|
||||
emitReadable(this);
|
||||
return null;
|
||||
}
|
||||
|
||||
n = howMuchToRead(n, state);
|
||||
|
||||
// if we've ended, and we're now clear, then finish it up.
|
||||
if (n === 0 && state.ended) {
|
||||
ret = null;
|
||||
|
||||
// In cases where the decoder did not receive enough data
|
||||
// to produce a full chunk, then immediately received an
|
||||
// EOF, state.buffer will contain [<Buffer >, <Buffer 00 ...>].
|
||||
// howMuchToRead will see this and coerce the amount to
|
||||
// read to zero (because it's looking at the length of the
|
||||
// first <Buffer > in state.buffer), and we'll end up here.
|
||||
//
|
||||
// This can only happen via state.decoder -- no other venue
|
||||
// exists for pushing a zero-length chunk into state.buffer
|
||||
// and triggering this behavior. In this case, we return our
|
||||
// remaining data and end the stream, if appropriate.
|
||||
if (state.length > 0 && state.decoder) {
|
||||
ret = fromList(n, state);
|
||||
state.length -= ret.length;
|
||||
}
|
||||
|
||||
if (state.length === 0)
|
||||
endReadable(this);
|
||||
|
||||
return ret;
|
||||
}
|
||||
|
||||
// All the actual chunk generation logic needs to be
|
||||
// *below* the call to _read. The reason is that in certain
|
||||
// synthetic stream cases, such as passthrough streams, _read
|
||||
// may be a completely synchronous operation which may change
|
||||
// the state of the read buffer, providing enough data when
|
||||
// before there was *not* enough.
|
||||
//
|
||||
// So, the steps are:
|
||||
// 1. Figure out what the state of things will be after we do
|
||||
// a read from the buffer.
|
||||
//
|
||||
// 2. If that resulting state will trigger a _read, then call _read.
|
||||
  // Note that this may be asynchronous, or synchronous.  Yes, it is
  // deeply ugly to write APIs this way, but that still doesn't mean
  // that the Readable class should behave improperly, as streams are
  // designed to be sync/async agnostic.
  // Take note if the _read call is sync or async (ie, if the read call
  // has returned yet), so that we know whether or not it's safe to emit
  // 'readable' etc.
  //
  // 3. Actually pull the requested chunks out of the buffer and return.

  // if we need a readable event, then we need to do some reading.
  var doRead = state.needReadable;

  // if we currently have less than the highWaterMark, then also read some
  if (state.length - n <= state.highWaterMark)
    doRead = true;

  // however, if we've ended, then there's no point, and if we're already
  // reading, then it's unnecessary.
  if (state.ended || state.reading)
    doRead = false;

  if (doRead) {
    state.reading = true;
    state.sync = true;
    // if the length is currently zero, then we *need* a readable event.
    if (state.length === 0)
      state.needReadable = true;
    // call internal read method
    this._read(state.highWaterMark);
    state.sync = false;
  }

  // If _read called its callback synchronously, then `reading`
  // will be false, and we need to re-evaluate how much data we
  // can return to the user.
  if (doRead && !state.reading)
    n = howMuchToRead(nOrig, state);

  if (n > 0)
    ret = fromList(n, state);
  else
    ret = null;

  if (ret === null) {
    state.needReadable = true;
    n = 0;
  }

  state.length -= n;

  // If we have nothing in the buffer, then we want to know
  // as soon as we *do* get something into the buffer.
  if (state.length === 0 && !state.ended)
    state.needReadable = true;

  // If we happened to read() exactly the remaining amount in the
  // buffer, and the EOF has been seen at this point, then make sure
  // that we emit 'end' on the very next tick.
  if (state.ended && !state.endEmitted && state.length === 0)
    endReadable(this);

  return ret;
};

function chunkInvalid(state, chunk) {
  var er = null;
  if (!Buffer.isBuffer(chunk) &&
      'string' !== typeof chunk &&
      chunk !== null &&
      chunk !== undefined &&
      !state.objectMode) {
    er = new TypeError('Invalid non-string/buffer chunk');
  }
  return er;
}


function onEofChunk(stream, state) {
  if (state.decoder && !state.ended) {
    var chunk = state.decoder.end();
    if (chunk && chunk.length) {
      state.buffer.push(chunk);
      state.length += state.objectMode ? 1 : chunk.length;
    }
  }
  state.ended = true;

  // if we've ended and we have some data left, then emit
  // 'readable' now to make sure it gets picked up.
  if (state.length > 0)
    emitReadable(stream);
  else
    endReadable(stream);
}

// Don't emit readable right away in sync mode, because this can trigger
// another read() call => stack overflow.  This way, it might trigger
// a nextTick recursion warning, but that's not so bad.
function emitReadable(stream) {
  var state = stream._readableState;
  state.needReadable = false;
  if (state.emittedReadable)
    return;

  state.emittedReadable = true;
  if (state.sync)
    process.nextTick(function() {
      emitReadable_(stream);
    });
  else
    emitReadable_(stream);
}

function emitReadable_(stream) {
  stream.emit('readable');
}


// at this point, the user has presumably seen the 'readable' event,
// and called read() to consume some data.  that may have triggered
// in turn another _read(n) call, in which case reading = true if
// it's in progress.
// However, if we're not ended, or reading, and the length < hwm,
// then go ahead and try to read some more preemptively.
function maybeReadMore(stream, state) {
  if (!state.readingMore) {
    state.readingMore = true;
    process.nextTick(function() {
      maybeReadMore_(stream, state);
    });
  }
}

function maybeReadMore_(stream, state) {
  var len = state.length;
  while (!state.reading && !state.flowing && !state.ended &&
         state.length < state.highWaterMark) {
    stream.read(0);
    if (len === state.length)
      // didn't get any data, stop spinning.
      break;
    else
      len = state.length;
  }
  state.readingMore = false;
}

// abstract method.  to be overridden in specific implementation classes.
// call cb(er, data) where data is <= n in length.
// for virtual (non-string, non-buffer) streams, "length" is somewhat
// arbitrary, and perhaps not very meaningful.
Readable.prototype._read = function(n) {
  this.emit('error', new Error('not implemented'));
};
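
// An illustrative sketch (not part of the module) of the override this stub
// expects, assuming the package is required as 'readable-stream': a Readable
// that produces ten numbered chunks and then signals EOF by pushing null.
//
//   var Readable = require('readable-stream').Readable;
//   var util = require('util');
//
//   function Counter(opts) {
//     Readable.call(this, opts);
//     this._n = 0;
//   }
//   util.inherits(Counter, Readable);
//
//   Counter.prototype._read = function(n) {
//     this._n += 1;
//     if (this._n > 10)
//       this.push(null);          // EOF
//     else
//       this.push('' + this._n);  // may be called sync or async
//   };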

Readable.prototype.pipe = function(dest, pipeOpts) {
  var src = this;
  var state = this._readableState;

  switch (state.pipesCount) {
    case 0:
      state.pipes = dest;
      break;
    case 1:
      state.pipes = [state.pipes, dest];
      break;
    default:
      state.pipes.push(dest);
      break;
  }
  state.pipesCount += 1;

  var doEnd = (!pipeOpts || pipeOpts.end !== false) &&
              dest !== process.stdout &&
              dest !== process.stderr;

  var endFn = doEnd ? onend : cleanup;
  if (state.endEmitted)
    process.nextTick(endFn);
  else
    src.once('end', endFn);

  dest.on('unpipe', onunpipe);
  function onunpipe(readable) {
    if (readable !== src) return;
    cleanup();
  }

  function onend() {
    dest.end();
  }

  // when the dest drains, it reduces the awaitDrain counter
  // on the source.  This would be more elegant with a .once()
  // handler in flow(), but adding and removing repeatedly is
  // too slow.
  var ondrain = pipeOnDrain(src);
  dest.on('drain', ondrain);

  function cleanup() {
    // cleanup event handlers once the pipe is broken
    dest.removeListener('close', onclose);
    dest.removeListener('finish', onfinish);
    dest.removeListener('drain', ondrain);
    dest.removeListener('error', onerror);
    dest.removeListener('unpipe', onunpipe);
    src.removeListener('end', onend);
    src.removeListener('end', cleanup);

    // if the reader is waiting for a drain event from this
    // specific writer, then it would cause it to never start
    // flowing again.
    // So, if this is awaiting a drain, then we just call it now.
    // If we don't know, then assume that we are waiting for one.
    if (!dest._writableState || dest._writableState.needDrain)
      ondrain();
  }

  // if the dest has an error, then stop piping into it.
  // however, don't suppress the throwing behavior for this.
  function onerror(er) {
    unpipe();
    dest.removeListener('error', onerror);
    if (EE.listenerCount(dest, 'error') === 0)
      dest.emit('error', er);
  }
  // This is a brutally ugly hack to make sure that our error handler
  // is attached before any userland ones.  NEVER DO THIS.
  if (!dest._events || !dest._events.error)
    dest.on('error', onerror);
  else if (isArray(dest._events.error))
    dest._events.error.unshift(onerror);
  else
    dest._events.error = [onerror, dest._events.error];


  // Both close and finish should trigger unpipe, but only once.
  function onclose() {
    dest.removeListener('finish', onfinish);
    unpipe();
  }
  dest.once('close', onclose);
  function onfinish() {
    dest.removeListener('close', onclose);
    unpipe();
  }
  dest.once('finish', onfinish);

  function unpipe() {
    src.unpipe(dest);
  }

  // tell the dest that it's being piped to
  dest.emit('pipe', src);

  // start the flow if it hasn't been started already.
  if (!state.flowing) {
    // the handler that waits for readable events after all
    // the data gets sucked out in flow.
    // This would be easier to follow with a .once() handler
    // in flow(), but that is too slow.
    this.on('readable', pipeOnReadable);

    state.flowing = true;
    process.nextTick(function() {
      flow(src);
    });
  }

  return dest;
};
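
// Usage sketch for pipe() (illustrative only; the file names are
// hypothetical). With pipeOpts.end === false the destination is left open,
// so the consumer must end it explicitly; note that process.stdout and
// process.stderr are never ended by pipe() regardless of this option.
//
//   var fs = require('fs');
//   var src = fs.createReadStream('in.txt');
//   var dst = fs.createWriteStream('out.txt');
//
//   src.pipe(dst, { end: false });
//   src.on('end', function() {
//     dst.end('trailer written after the piped data\n');
//   });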

function pipeOnDrain(src) {
  return function() {
    var dest = this;
    var state = src._readableState;
    state.awaitDrain--;
    if (state.awaitDrain === 0)
      flow(src);
  };
}

function flow(src) {
  var state = src._readableState;
  var chunk;
  state.awaitDrain = 0;

  function write(dest, i, list) {
    var written = dest.write(chunk);
    if (false === written) {
      state.awaitDrain++;
    }
  }

  while (state.pipesCount && null !== (chunk = src.read())) {

    if (state.pipesCount === 1)
      write(state.pipes, 0, null);
    else
      forEach(state.pipes, write);

    src.emit('data', chunk);

    // if anyone needs a drain, then we have to wait for that.
    if (state.awaitDrain > 0)
      return;
  }

  // if every destination was unpiped, either before entering this
  // function, or in the while loop, then stop flowing.
  //
  // NB: This is a pretty rare edge case.
  if (state.pipesCount === 0) {
    state.flowing = false;

    // if there were data event listeners added, then switch to old mode.
    if (EE.listenerCount(src, 'data') > 0)
      emitDataEvents(src);
    return;
  }

  // at this point, no one needed a drain, so we just ran out of data
  // on the next readable event, start it over again.
  state.ranOut = true;
}

function pipeOnReadable() {
  if (this._readableState.ranOut) {
    this._readableState.ranOut = false;
    flow(this);
  }
}


Readable.prototype.unpipe = function(dest) {
  var state = this._readableState;

  // if we're not piping anywhere, then do nothing.
  if (state.pipesCount === 0)
    return this;

  // just one destination.  most common case.
  if (state.pipesCount === 1) {
    // passed in one, but it's not the right one.
    if (dest && dest !== state.pipes)
      return this;

    if (!dest)
      dest = state.pipes;

    // got a match.
    state.pipes = null;
    state.pipesCount = 0;
    this.removeListener('readable', pipeOnReadable);
    state.flowing = false;
    if (dest)
      dest.emit('unpipe', this);
    return this;
  }

  // slow case. multiple pipe destinations.

  if (!dest) {
    // remove all.
    var dests = state.pipes;
    var len = state.pipesCount;
    state.pipes = null;
    state.pipesCount = 0;
    this.removeListener('readable', pipeOnReadable);
    state.flowing = false;

    for (var i = 0; i < len; i++)
      dests[i].emit('unpipe', this);
    return this;
  }

  // try to find the right one.
  var i = indexOf(state.pipes, dest);
  if (i === -1)
    return this;

  state.pipes.splice(i, 1);
  state.pipesCount -= 1;
  if (state.pipesCount === 1)
    state.pipes = state.pipes[0];

  dest.emit('unpipe', this);

  return this;
};

// set up data events if they are asked for
// Ensure readable listeners eventually get something
Readable.prototype.on = function(ev, fn) {
  var res = Stream.prototype.on.call(this, ev, fn);

  if (ev === 'data' && !this._readableState.flowing)
    emitDataEvents(this);

  if (ev === 'readable' && this.readable) {
    var state = this._readableState;
    if (!state.readableListening) {
      state.readableListening = true;
      state.emittedReadable = false;
      state.needReadable = true;
      if (!state.reading) {
        this.read(0);
      } else if (state.length) {
        emitReadable(this, state);
      }
    }
  }

  return res;
};
Readable.prototype.addListener = Readable.prototype.on;

// pause() and resume() are remnants of the legacy readable stream API
// If the user uses them, then switch into old mode.
Readable.prototype.resume = function() {
  emitDataEvents(this);
  this.read(0);
  this.emit('resume');
};

Readable.prototype.pause = function() {
  emitDataEvents(this, true);
  this.emit('pause');
};
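
// Sketch of the legacy 'old mode' that resume()/pause() (or attaching a
// 'data' listener) switch a stream into (illustrative only; rs stands for
// any Readable from this module):
//
//   rs.on('data', function(chunk) {   // attaching 'data' forces old mode
//     process.stdout.write(chunk);
//   });
//   rs.pause();                       // suppress 'data' events for a while
//   setTimeout(function() {
//     rs.resume();                    // buffered chunks start flowing again
//   }, 1000);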

function emitDataEvents(stream, startPaused) {
  var state = stream._readableState;

  if (state.flowing) {
    // https://github.com/isaacs/readable-stream/issues/16
    throw new Error('Cannot switch to old mode now.');
  }

  var paused = startPaused || false;
  var readable = false;

  // convert to an old-style stream.
  stream.readable = true;
  stream.pipe = Stream.prototype.pipe;
  stream.on = stream.addListener = Stream.prototype.on;

  stream.on('readable', function() {
    readable = true;

    var c;
    while (!paused && (null !== (c = stream.read())))
      stream.emit('data', c);

    if (c === null) {
      readable = false;
      stream._readableState.needReadable = true;
    }
  });

  stream.pause = function() {
    paused = true;
    this.emit('pause');
  };

  stream.resume = function() {
    paused = false;
    if (readable)
      process.nextTick(function() {
        stream.emit('readable');
      });
    else
      this.read(0);
    this.emit('resume');
  };

  // now make it start, just in case it hadn't already.
  stream.emit('readable');
}

// wrap an old-style stream as the async data source.
// This is *not* part of the readable stream interface.
// It is an ugly unfortunate mess of history.
Readable.prototype.wrap = function(stream) {
  var state = this._readableState;
  var paused = false;

  var self = this;
  stream.on('end', function() {
    if (state.decoder && !state.ended) {
      var chunk = state.decoder.end();
      if (chunk && chunk.length)
        self.push(chunk);
    }

    self.push(null);
  });

  stream.on('data', function(chunk) {
    if (state.decoder)
      chunk = state.decoder.write(chunk);

    // don't skip over falsy values in objectMode
    //if (state.objectMode && util.isNullOrUndefined(chunk))
    if (state.objectMode && (chunk === null || chunk === undefined))
      return;
    else if (!state.objectMode && (!chunk || !chunk.length))
      return;

    var ret = self.push(chunk);
    if (!ret) {
      paused = true;
      stream.pause();
    }
  });

  // proxy all the other methods.
  // important when wrapping filters and duplexes.
  for (var i in stream) {
    if (typeof stream[i] === 'function' &&
        typeof this[i] === 'undefined') {
      this[i] = function(method) { return function() {
        return stream[method].apply(stream, arguments);
      }}(i);
    }
  }

  // proxy certain important events.
  var events = ['error', 'close', 'destroy', 'pause', 'resume'];
  forEach(events, function(ev) {
    stream.on(ev, self.emit.bind(self, ev));
  });

  // when we try to consume some more bytes, simply unpause the
  // underlying stream.
  self._read = function(n) {
    if (paused) {
      paused = false;
      stream.resume();
    }
  };

  return self;
};
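
// Usage sketch for wrap() (illustrative only; the old-style module is
// hypothetical): lifting a 'data'-event stream into a streams2 Readable so
// that read() and 'readable' work on it.
//
//   var OldReader = require('./old-api-module.js');  // hypothetical source
//   var oreader = new OldReader();
//   var myReader = new Readable().wrap(oreader);
//
//   myReader.on('readable', function() {
//     var chunk = myReader.read();
//   });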


// exposed for testing purposes only.
Readable._fromList = fromList;

// Pluck off n bytes from an array of buffers.
// Length is the combined lengths of all the buffers in the list.
function fromList(n, state) {
  var list = state.buffer;
  var length = state.length;
  var stringMode = !!state.decoder;
  var objectMode = !!state.objectMode;
  var ret;

  // nothing in the list, definitely empty.
  if (list.length === 0)
    return null;

  if (length === 0)
    ret = null;
  else if (objectMode)
    ret = list.shift();
  else if (!n || n >= length) {
    // read it all, truncate the array.
    if (stringMode)
      ret = list.join('');
    else
      ret = Buffer.concat(list, length);
    list.length = 0;
  } else {
    // read just some of it.
    if (n < list[0].length) {
      // just take a part of the first list item.
      // slice is the same for buffers and strings.
      var buf = list[0];
      ret = buf.slice(0, n);
      list[0] = buf.slice(n);
    } else if (n === list[0].length) {
      // first list is a perfect match
      ret = list.shift();
    } else {
      // complex case.
      // we have enough to cover it, but it spans past the first buffer.
      if (stringMode)
        ret = '';
      else
        ret = new Buffer(n);

      var c = 0;
      for (var i = 0, l = list.length; i < l && c < n; i++) {
        var buf = list[0];
        var cpy = Math.min(n - c, buf.length);

        if (stringMode)
          ret += buf.slice(0, cpy);
        else
          buf.copy(ret, c, 0, cpy);

        if (cpy < buf.length)
          list[0] = buf.slice(cpy);
        else
          list.shift();

        c += cpy;
      }
    }
  }

  return ret;
}
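
// Worked sketch of fromList via the testing hook above (illustrative only;
// meant to run outside this file with the package required as
// 'readable-stream'). A 5-byte read spans the 3-byte and 4-byte buffers:
// the loop copies 3 bytes from the first entry and 2 from the second,
// leaving 'fg' queued. The state literal carries only the fields fromList
// actually reads; the caller (read) is what decrements state.length.
//
//   var Readable = require('readable-stream').Readable;
//   var state = {
//     buffer: [new Buffer('abc'), new Buffer('defg')],
//     length: 7,
//     decoder: null,
//     objectMode: false
//   };
//   var out = Readable._fromList(5, state);
//   console.log(out.toString());              // => 'abcde'
//   console.log(state.buffer[0].toString());  // => 'fg'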

function endReadable(stream) {
  var state = stream._readableState;

  // If we get here before consuming all the bytes, then that is a
  // bug in node.  Should never happen.
  if (state.length > 0)
    throw new Error('endReadable called on non-empty stream');

  if (!state.endEmitted && state.calledRead) {
    state.ended = true;
    process.nextTick(function() {
      // Check that we didn't get one last unshift.
      if (!state.endEmitted && state.length === 0) {
        state.endEmitted = true;
        stream.readable = false;
        stream.emit('end');
      }
    });
  }
}

function forEach (xs, f) {
  for (var i = 0, l = xs.length; i < l; i++) {
    f(xs[i], i);
  }
}

function indexOf (xs, x) {
  for (var i = 0, l = xs.length; i < l; i++) {
    if (xs[i] === x) return i;
  }
  return -1;
}
210
node_modules/mongodb/node_modules/readable-stream/lib/_stream_transform.js
generated
vendored
Normal file
@@ -0,0 +1,210 @@
// Copyright Joyent, Inc. and other Node contributors.
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the
// "Software"), to deal in the Software without restriction, including
// without limitation the rights to use, copy, modify, merge, publish,
// distribute, sublicense, and/or sell copies of the Software, and to permit
// persons to whom the Software is furnished to do so, subject to the
// following conditions:
//
// The above copyright notice and this permission notice shall be included
// in all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN
// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
// USE OR OTHER DEALINGS IN THE SOFTWARE.


// a transform stream is a readable/writable stream where you do
// something with the data.  Sometimes it's called a "filter",
// but that's not a great name for it, since that implies a thing where
// some bits pass through, and others are simply ignored.  (That would
// be a valid example of a transform, of course.)
//
// While the output is causally related to the input, it's not a
// necessarily symmetric or synchronous transformation.  For example,
// a zlib stream might take multiple plain-text writes(), and then
// emit a single compressed chunk some time in the future.
//
// Here's how this works:
//
// The Transform stream has all the aspects of the readable and writable
// stream classes.  When you write(chunk), that calls _write(chunk,cb)
// internally, and returns false if there's a lot of pending writes
// buffered up.  When you call read(), that calls _read(n) until
// there's enough pending readable data buffered up.
//
// In a transform stream, the written data is placed in a buffer.  When
// _read(n) is called, it transforms the queued up data, calling the
// buffered _write cb's as it consumes chunks.  If consuming a single
// written chunk would result in multiple output chunks, then the first
// outputted bit calls the readcb, and subsequent chunks just go into
// the read buffer, and will cause it to emit 'readable' if necessary.
//
// This way, back-pressure is actually determined by the reading side,
// since _read has to be called to start processing a new chunk.  However,
// a pathological inflate type of transform can cause excessive buffering
// here.  For example, imagine a stream where every byte of input is
// interpreted as an integer from 0-255, and then results in that many
// bytes of output.  Writing the 4 bytes {ff,ff,ff,ff} would result in
// 1kb of data being output.  In this case, you could write a very small
// amount of input, and end up with a very large amount of output.  In
// such a pathological inflating mechanism, there'd be no way to tell
// the system to stop doing the transform.  A single 4MB write could
// cause the system to run out of memory.
//
// However, even in such a pathological case, only a single written chunk
// would be consumed, and then the rest would wait (un-transformed) until
// the results of the previous transformed chunk were consumed.

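// A minimal concrete Transform, as a sketch of the contract described above
// (illustrative only; assumes the package is required as 'readable-stream'):
// each written chunk is upper-cased and pushed to the readable side, and
// cb() is called to ask for the next chunk.
//
//   var Transform = require('readable-stream').Transform;
//   var util = require('util');
//
//   function Upcase(opts) {
//     Transform.call(this, opts);
//   }
//   util.inherits(Upcase, Transform);
//
//   Upcase.prototype._transform = function(chunk, encoding, cb) {
//     this.push(chunk.toString().toUpperCase());
//     cb();  // never calling cb() stalls the whole pipeline
//   };
//
//   process.stdin.pipe(new Upcase()).pipe(process.stdout);
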
module.exports = Transform;

var Duplex = require('./_stream_duplex');

/*<replacement>*/
var util = require('core-util-is');
util.inherits = require('inherits');
/*</replacement>*/

util.inherits(Transform, Duplex);


function TransformState(options, stream) {
  this.afterTransform = function(er, data) {
    return afterTransform(stream, er, data);
  };

  this.needTransform = false;
  this.transforming = false;
  this.writecb = null;
  this.writechunk = null;
}

function afterTransform(stream, er, data) {
  var ts = stream._transformState;
  ts.transforming = false;

  var cb = ts.writecb;

  if (!cb)
    return stream.emit('error', new Error('no writecb in Transform class'));

  ts.writechunk = null;
  ts.writecb = null;

  if (data !== null && data !== undefined)
    stream.push(data);

  if (cb)
    cb(er);

  var rs = stream._readableState;
  rs.reading = false;
  if (rs.needReadable || rs.length < rs.highWaterMark) {
    stream._read(rs.highWaterMark);
  }
}


function Transform(options) {
  if (!(this instanceof Transform))
    return new Transform(options);

  Duplex.call(this, options);

  var ts = this._transformState = new TransformState(options, this);

  // when the writable side finishes, then flush out anything remaining.
  var stream = this;

  // start out asking for a readable event once data is transformed.
  this._readableState.needReadable = true;

  // we have implemented the _read method, and done the other things
  // that Readable wants before the first _read call, so unset the
  // sync guard flag.
  this._readableState.sync = false;

  this.once('finish', function() {
    if ('function' === typeof this._flush)
      this._flush(function(er) {
        done(stream, er);
      });
    else
      done(stream);
  });
}

Transform.prototype.push = function(chunk, encoding) {
  this._transformState.needTransform = false;
  return Duplex.prototype.push.call(this, chunk, encoding);
};

// This is the part where you do stuff!
// override this function in implementation classes.
// 'chunk' is an input chunk.
//
// Call `push(newChunk)` to pass along transformed output
// to the readable side.  You may call 'push' zero or more times.
//
// Call `cb(err)` when you are done with this chunk.  If you pass
// an error, then that'll put the hurt on the whole operation.  If you
// never call cb(), then you'll never get another chunk.
Transform.prototype._transform = function(chunk, encoding, cb) {
  throw new Error('not implemented');
};

Transform.prototype._write = function(chunk, encoding, cb) {
  var ts = this._transformState;
  ts.writecb = cb;
  ts.writechunk = chunk;
  ts.writeencoding = encoding;
  if (!ts.transforming) {
    var rs = this._readableState;
    if (ts.needTransform ||
        rs.needReadable ||
        rs.length < rs.highWaterMark)
      this._read(rs.highWaterMark);
  }
};

// Doesn't matter what the args are here.
// _transform does all the work.
// That we got here means that the readable side wants more data.
Transform.prototype._read = function(n) {
  var ts = this._transformState;

  if (ts.writechunk !== null && ts.writecb && !ts.transforming) {
    ts.transforming = true;
    this._transform(ts.writechunk, ts.writeencoding, ts.afterTransform);
  } else {
    // mark that we need a transform, so that any data that comes in
    // will get processed, now that we've asked for it.
    ts.needTransform = true;
  }
};


function done(stream, er) {
  if (er)
    return stream.emit('error', er);

  // if there's nothing in the write buffer, then that means
  // that nothing more will ever be provided
  var ws = stream._writableState;
  var rs = stream._readableState;
  var ts = stream._transformState;

  if (ws.length)
    throw new Error('calling transform done when ws.length != 0');

  if (ts.transforming)
    throw new Error('calling transform done when still transforming');

  return stream.push(null);
}
386
node_modules/mongodb/node_modules/readable-stream/lib/_stream_writable.js
generated
vendored
Normal file
@@ -0,0 +1,386 @@
// Copyright Joyent, Inc. and other Node contributors.
//
// Permission is hereby granted, free of charge, to any person obtaining a
// copy of this software and associated documentation files (the
// "Software"), to deal in the Software without restriction, including
// without limitation the rights to use, copy, modify, merge, publish,
// distribute, sublicense, and/or sell copies of the Software, and to permit
// persons to whom the Software is furnished to do so, subject to the
// following conditions:
//
// The above copyright notice and this permission notice shall be included
// in all copies or substantial portions of the Software.
//
// THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS
// OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF
// MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN
// NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM,
// DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR
// OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE
// USE OR OTHER DEALINGS IN THE SOFTWARE.

// A bit simpler than readable streams.
// Implement an async ._write(chunk, cb), and it'll handle all
// the drain event emission and buffering.

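// A minimal sketch of the ._write contract described above (illustrative
// only; assumes the package is required as 'readable-stream'): chunks are
// delivered to _write one at a time, and calling cb signals readiness for
// the next one.
//
//   var Writable = require('readable-stream').Writable;
//   var util = require('util');
//
//   function Sink(opts) {
//     Writable.call(this, opts);
//     this.bytes = 0;
//   }
//   util.inherits(Sink, Writable);
//
//   Sink.prototype._write = function(chunk, encoding, cb) {
//     this.bytes += chunk.length;
//     setImmediate(cb);  // pretend the flush happens asynchronously
//   };
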
module.exports = Writable;

/*<replacement>*/
var Buffer = require('buffer').Buffer;
/*</replacement>*/

Writable.WritableState = WritableState;


/*<replacement>*/
var util = require('core-util-is');
util.inherits = require('inherits');
/*</replacement>*/

var Stream = require('stream');

util.inherits(Writable, Stream);

function WriteReq(chunk, encoding, cb) {
  this.chunk = chunk;
  this.encoding = encoding;
  this.callback = cb;
}

function WritableState(options, stream) {
  options = options || {};

  // the point at which write() starts returning false
  // Note: 0 is a valid value, means that we always return false if
  // the entire buffer is not flushed immediately on write()
  var hwm = options.highWaterMark;
  this.highWaterMark = (hwm || hwm === 0) ? hwm : 16 * 1024;

  // object stream flag to indicate whether or not this stream
  // contains buffers or objects.
  this.objectMode = !!options.objectMode;

  // cast to ints.
  this.highWaterMark = ~~this.highWaterMark;

  this.needDrain = false;
  // at the start of calling end()
  this.ending = false;
  // when end() has been called, and returned
  this.ended = false;
  // when 'finish' is emitted
  this.finished = false;

  // should we decode strings into buffers before passing to _write?
  // this is here so that some node-core streams can optimize string
  // handling at a lower level.
  var noDecode = options.decodeStrings === false;
  this.decodeStrings = !noDecode;

  // Crypto is kind of old and crusty.  Historically, its default string
  // encoding is 'binary' so we have to make this configurable.
  // Everything else in the universe uses 'utf8', though.
  this.defaultEncoding = options.defaultEncoding || 'utf8';

  // not an actual buffer we keep track of, but a measurement
  // of how much we're waiting to get pushed to some underlying
  // socket or file.
  this.length = 0;

  // a flag to see when we're in the middle of a write.
  this.writing = false;

  // a flag to be able to tell if the onwrite cb is called immediately,
  // or on a later tick.  We set this to true at first, because any
  // actions that shouldn't happen until "later" should generally also
  // not happen before the first write call.
  this.sync = true;

  // a flag to know if we're processing previously buffered items, which
  // may call the _write() callback in the same tick, so that we don't
  // end up in an overlapped onwrite situation.
  this.bufferProcessing = false;

  // the callback that's passed to _write(chunk,cb)
  this.onwrite = function(er) {
    onwrite(stream, er);
  };

  // the callback that the user supplies to write(chunk,encoding,cb)
  this.writecb = null;

  // the amount that is being written when _write is called.
  this.writelen = 0;

  this.buffer = [];

  // True if the error was already emitted and should not be thrown again
  this.errorEmitted = false;
}

function Writable(options) {
  var Duplex = require('./_stream_duplex');

  // Writable ctor is applied to Duplexes, though they're not
  // instanceof Writable, they're instanceof Readable.
  if (!(this instanceof Writable) && !(this instanceof Duplex))
    return new Writable(options);

  this._writableState = new WritableState(options, this);

  // legacy.
  this.writable = true;

  Stream.call(this);
}

// Otherwise people can pipe Writable streams, which is just wrong.
Writable.prototype.pipe = function() {
  this.emit('error', new Error('Cannot pipe. Not readable.'));
};


function writeAfterEnd(stream, state, cb) {
  var er = new Error('write after end');
  // TODO: defer error events consistently everywhere, not just the cb
  stream.emit('error', er);
  process.nextTick(function() {
    cb(er);
  });
}

// If we get something that is not a buffer, string, null, or undefined,
// and we're not in objectMode, then that's an error.
// Otherwise stream chunks are all considered to be of length=1, and the
// watermarks determine how many objects to keep in the buffer, rather than
// how many bytes or characters.
function validChunk(stream, state, chunk, cb) {
  var valid = true;
  if (!Buffer.isBuffer(chunk) &&
      'string' !== typeof chunk &&
      chunk !== null &&
      chunk !== undefined &&
      !state.objectMode) {
    var er = new TypeError('Invalid non-string/buffer chunk');
    stream.emit('error', er);
    process.nextTick(function() {
      cb(er);
    });
    valid = false;
  }
  return valid;
}
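
// objectMode sketch (illustrative only; assumes the package is required as
// 'readable-stream'): in objectMode, arbitrary non-buffer values pass
// validChunk, and highWaterMark counts queued objects rather than bytes.
//
//   var Writable = require('readable-stream').Writable;
//   var ws = new Writable({ objectMode: true, highWaterMark: 2 });
//   ws._write = function(obj, encoding, cb) {
//     console.log('got', obj);
//     cb();
//   };
//   ws.write({ n: 1 });  // a plain object is a valid chunk here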

Writable.prototype.write = function(chunk, encoding, cb) {
  var state = this._writableState;
  var ret = false;

  if (typeof encoding === 'function') {
    cb = encoding;
    encoding = null;
  }

  if (Buffer.isBuffer(chunk))
    encoding = 'buffer';
  else if (!encoding)
    encoding = state.defaultEncoding;

  if (typeof cb !== 'function')
    cb = function() {};

  if (state.ended)
    writeAfterEnd(this, state, cb);
  else if (validChunk(this, state, chunk, cb))
    ret = writeOrBuffer(this, state, chunk, encoding, cb);

  return ret;
};

function decodeChunk(state, chunk, encoding) {
  if (!state.objectMode &&
      state.decodeStrings !== false &&
      typeof chunk === 'string') {
    chunk = new Buffer(chunk, encoding);
  }
  return chunk;
}

// if we're already writing something, then just put this
// in the queue, and wait our turn.  Otherwise, call _write
// If we return false, then we need a drain event, so set that flag.
function writeOrBuffer(stream, state, chunk, encoding, cb) {
  chunk = decodeChunk(state, chunk, encoding);
  if (Buffer.isBuffer(chunk))
    encoding = 'buffer';
  var len = state.objectMode ? 1 : chunk.length;

  state.length += len;

  var ret = state.length < state.highWaterMark;
  // we must ensure that previous needDrain will not be reset to false.
  if (!ret)
    state.needDrain = true;

  if (state.writing)
    state.buffer.push(new WriteReq(chunk, encoding, cb));
  else
    doWrite(stream, state, len, chunk, encoding, cb);

  return ret;
}

function doWrite(stream, state, len, chunk, encoding, cb) {
  state.writelen = len;
  state.writecb = cb;
  state.writing = true;
  state.sync = true;
  stream._write(chunk, encoding, state.onwrite);
  state.sync = false;
}

function onwriteError(stream, state, sync, er, cb) {
  if (sync)
    process.nextTick(function() {
      cb(er);
    });
  else
    cb(er);

  stream._writableState.errorEmitted = true;
  stream.emit('error', er);
}

function onwriteStateUpdate(state) {
  state.writing = false;
  state.writecb = null;
  state.length -= state.writelen;
  state.writelen = 0;
}

function onwrite(stream, er) {
  var state = stream._writableState;
  var sync = state.sync;
  var cb = state.writecb;

  onwriteStateUpdate(state);

  if (er)
    onwriteError(stream, state, sync, er, cb);
  else {
    // Check if we're actually ready to finish, but don't emit yet
    var finished = needFinish(stream, state);

    if (!finished && !state.bufferProcessing && state.buffer.length)
      clearBuffer(stream, state);

    if (sync) {
      process.nextTick(function() {
        afterWrite(stream, state, finished, cb);
      });
    } else {
      afterWrite(stream, state, finished, cb);
    }
  }
}

function afterWrite(stream, state, finished, cb) {
  if (!finished)
    onwriteDrain(stream, state);
  cb();
  if (finished)
    finishMaybe(stream, state);
}

// Must force callback to be called on nextTick, so that we don't
// emit 'drain' before the write() consumer gets the 'false' return
// value, and has a chance to attach a 'drain' listener.
function onwriteDrain(stream, state) {
  if (state.length === 0 && state.needDrain) {
    state.needDrain = false;
    stream.emit('drain');
  }
}
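
// Consumer-side sketch of the drain contract (illustrative only; ws stands
// for any Writable from this module): when write() returns false the buffer
// is at or over the high-water mark, so the producer should stop and wait
// for a single 'drain' event before writing more.
//
//   function writeMillion(ws, cb) {
//     var i = 1000000;
//     (function loop() {
//       var ok = true;
//       while (i > 0 && ok) {
//         i--;
//         ok = ws.write('x');      // false => over the high-water mark
//       }
//       if (i > 0)
//         ws.once('drain', loop);  // resume once the buffer empties
//       else
//         cb();
//     })();
//   }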


// if there's something in the buffer waiting, then process it
function clearBuffer(stream, state) {
  state.bufferProcessing = true;

  for (var c = 0; c < state.buffer.length; c++) {
    var entry = state.buffer[c];
    var chunk = entry.chunk;
    var encoding = entry.encoding;
    var cb = entry.callback;
    var len = state.objectMode ? 1 : chunk.length;

    doWrite(stream, state, len, chunk, encoding, cb);

    // if we didn't call the onwrite immediately, then
    // it means that we need to wait until it does.
    // also, that means that the chunk and cb are currently
    // being processed, so move the buffer counter past them.
    if (state.writing) {
      c++;
      break;
    }
  }

  state.bufferProcessing = false;
  if (c < state.buffer.length)
    state.buffer = state.buffer.slice(c);
  else
    state.buffer.length = 0;
}

Writable.prototype._write = function(chunk, encoding, cb) {
  cb(new Error('not implemented'));
};

Writable.prototype.end = function(chunk, encoding, cb) {
  var state = this._writableState;

  if (typeof chunk === 'function') {
    cb = chunk;
    chunk = null;
    encoding = null;
  } else if (typeof encoding === 'function') {
    cb = encoding;
    encoding = null;
  }

  if (typeof chunk !== 'undefined' && chunk !== null)
    this.write(chunk, encoding);

  // ignore unnecessary end() calls.
  if (!state.ending && !state.finished)
    endWritable(this, state, cb);
};


function needFinish(stream, state) {
  return (state.ending &&
          state.length === 0 &&
          !state.finished &&
          !state.writing);
}

function finishMaybe(stream, state) {
  var need = needFinish(stream, state);
  if (need) {
    state.finished = true;
    stream.emit('finish');
  }
  return need;
}

function endWritable(stream, state, cb) {
  state.ending = true;
  finishMaybe(stream, state);
  if (cb) {
    if (state.finished)
      process.nextTick(cb);
    else
      stream.once('finish', cb);
  }
  state.ended = true;
}
94
node_modules/mongodb/node_modules/readable-stream/package.json
generated
vendored
Normal file
@@ -0,0 +1,94 @@
{
  "_args": [
    [
      "readable-stream@1.0.31",
      "/home/mitchell/Desktop/test-mywebsite/mywebsite/node_modules/mongodb"
    ]
  ],
  "_from": "readable-stream@1.0.31",
  "_id": "readable-stream@1.0.31",
  "_inCache": true,
  "_installable": true,
  "_location": "/mongodb/readable-stream",
  "_npmUser": {
    "email": "rod@vagg.org",
    "name": "rvagg"
  },
  "_npmVersion": "1.4.9",
  "_phantomChildren": {},
  "_requested": {
    "name": "readable-stream",
    "raw": "readable-stream@1.0.31",
    "rawSpec": "1.0.31",
    "scope": null,
    "spec": "1.0.31",
    "type": "version"
  },
  "_requiredBy": [
    "/mongodb"
  ],
  "_resolved": "https://registry.npmjs.org/readable-stream/-/readable-stream-1.0.31.tgz",
  "_shasum": "8f2502e0bc9e3b0da1b94520aabb4e2603ecafae",
  "_shrinkwrap": null,
  "_spec": "readable-stream@1.0.31",
  "_where": "/home/mitchell/Desktop/test-mywebsite/mywebsite/node_modules/mongodb",
  "author": {
    "email": "i@izs.me",
    "name": "Isaac Z. Schlueter",
    "url": "http://blog.izs.me/"
  },
  "browser": {
    "util": false
  },
  "bugs": {
    "url": "https://github.com/isaacs/readable-stream/issues"
  },
  "dependencies": {
    "core-util-is": "~1.0.0",
    "inherits": "~2.0.1",
    "isarray": "0.0.1",
    "string_decoder": "~0.10.x"
  },
  "description": "Streams2, a user-land copy of the stream library from Node.js v0.10.x",
  "devDependencies": {
    "tap": "~0.2.6"
  },
  "directories": {},
  "dist": {
    "shasum": "8f2502e0bc9e3b0da1b94520aabb4e2603ecafae",
    "tarball": "http://registry.npmjs.org/readable-stream/-/readable-stream-1.0.31.tgz"
  },
  "homepage": "https://github.com/isaacs/readable-stream",
  "keywords": [
    "pipe",
    "readable",
    "stream"
  ],
  "license": "MIT",
  "main": "readable.js",
  "maintainers": [
    {
      "name": "isaacs",
      "email": "i@izs.me"
    },
    {
      "name": "tootallnate",
      "email": "nathan@tootallnate.net"
    },
    {
      "name": "rvagg",
      "email": "rod@vagg.org"
    }
  ],
  "name": "readable-stream",
  "optionalDependencies": {},
  "readme": "ERROR: No README data found!",
  "repository": {
    "type": "git",
    "url": "git://github.com/isaacs/readable-stream.git"
  },
  "scripts": {
    "test": "tap test/simple/*.js"
  },
  "version": "1.0.31"
}
1
node_modules/mongodb/node_modules/readable-stream/passthrough.js
generated
vendored
Normal file
@@ -0,0 +1 @@
module.exports = require("./lib/_stream_passthrough.js")
6
node_modules/mongodb/node_modules/readable-stream/readable.js
generated
vendored
Normal file
@@ -0,0 +1,6 @@
exports = module.exports = require('./lib/_stream_readable.js');
exports.Readable = exports;
exports.Writable = require('./lib/_stream_writable.js');
exports.Duplex = require('./lib/_stream_duplex.js');
exports.Transform = require('./lib/_stream_transform.js');
exports.PassThrough = require('./lib/_stream_passthrough.js');
1
node_modules/mongodb/node_modules/readable-stream/transform.js
generated
vendored
Normal file
@@ -0,0 +1 @@
module.exports = require("./lib/_stream_transform.js")
1
node_modules/mongodb/node_modules/readable-stream/writable.js
generated
vendored
Normal file
@@ -0,0 +1 @@
module.exports = require("./lib/_stream_writable.js")
98
node_modules/mongodb/package.json
generated
vendored
Normal file
@@ -0,0 +1,98 @@
{
  "_args": [
    [
      "mongodb@^2.0.0",
      "/home/mitchell/Desktop/test-mywebsite/mywebsite/node_modules/mongo-express"
    ]
  ],
  "_from": "mongodb@>=2.0.0 <3.0.0",
  "_id": "mongodb@2.1.3",
  "_inCache": true,
  "_installable": true,
  "_location": "/mongodb",
  "_nodeVersion": "4.2.4",
  "_npmUser": {
    "email": "christkv@gmail.com",
    "name": "christkv"
  },
  "_npmVersion": "2.14.12",
  "_phantomChildren": {
    "core-util-is": "1.0.2",
    "inherits": "2.0.1",
    "isarray": "0.0.1",
    "string_decoder": "0.10.31"
  },
  "_requested": {
    "name": "mongodb",
    "raw": "mongodb@^2.0.0",
    "rawSpec": "^2.0.0",
    "scope": null,
    "spec": ">=2.0.0 <3.0.0",
    "type": "range"
  },
  "_requiredBy": [
    "/mongo-express"
  ],
  "_resolved": "https://registry.npmjs.org/mongodb/-/mongodb-2.1.3.tgz",
  "_shasum": "8c1eb443e77ac2ea10e281307b1692efcc2187b5",
  "_shrinkwrap": null,
  "_spec": "mongodb@^2.0.0",
  "_where": "/home/mitchell/Desktop/test-mywebsite/mywebsite/node_modules/mongo-express",
  "author": {
    "name": "Christian Kvalheim"
  },
  "bugs": {
    "url": "https://github.com/mongodb/node-mongodb-native/issues"
  },
  "dependencies": {
    "es6-promise": "3.0.2",
    "mongodb-core": "1.2.31",
    "readable-stream": "1.0.31"
  },
  "description": "MongoDB legacy driver emulation layer on top of mongodb-core",
  "devDependencies": {
    "bluebird": "2.9.27",
    "bson": "~0.4",
    "co": "4.5.4",
    "gleak": "0.5.0",
    "integra": "0.1.8",
    "jsdoc": "3.3.0-beta3",
    "mongodb-extended-json": "1.3.0",
    "mongodb-topology-manager": "1.0.x",
    "mongodb-version-manager": "^0.8.10",
    "optimist": "0.6.1",
    "rimraf": "2.2.6",
    "semver": "4.1.0"
  },
  "directories": {},
  "dist": {
    "shasum": "8c1eb443e77ac2ea10e281307b1692efcc2187b5",
    "tarball": "http://registry.npmjs.org/mongodb/-/mongodb-2.1.3.tgz"
  },
  "gitHead": "a38d2e4f6df0794466a66c41b46593ad84746dcd",
  "homepage": "https://github.com/mongodb/node-mongodb-native",
  "keywords": [
    "driver",
    "legacy",
    "mongodb"
  ],
  "license": "Apache-2.0",
  "main": "index.js",
  "maintainers": [
    {
      "name": "christkv",
      "email": "christkv@gmail.com"
    }
  ],
  "name": "mongodb",
  "optionalDependencies": {},
  "readme": "ERROR: No README data found!",
  "repository": {
    "type": "git",
    "url": "git+ssh://git@github.com/mongodb/node-mongodb-native.git"
  },
  "scripts": {
    "test": "node test/runner.js -t functional"
  },
  "version": "2.1.3"
}
73
node_modules/mongodb/t.js
generated
vendored
Normal file
@@ -0,0 +1,73 @@
var MongoClient = require('./').MongoClient
  , assert = require('assert')
  , cappedCollectionName = "capped_test";


function capitalizeFirstLetter(string) {
  return string.charAt(0).toUpperCase() + string.slice(1);
}

function createTailedCursor(db, callback) {
  var collection = db.collection(cappedCollectionName)
    , cursor = collection.find({}, { tailable: true, awaitdata: true, numberOfRetries: Number.MAX_VALUE})
    , stream = cursor.stream()
    , statusGetters = ['notified', 'closed', 'dead', 'killed'];

  console.log('After stream open');
  statusGetters.forEach(function (s) {
    var getter = 'is' + capitalizeFirstLetter(s);
    console.log("cursor " + getter + " => ", cursor[getter]());
  });


  stream.on('error', callback);
  stream.on('end', callback.bind(null, 'end'));
  stream.on('close', callback.bind(null, 'close'));
  stream.on('readable', callback.bind(null, 'readable'));
  stream.on('data', callback.bind(null, null, 'data'));

  console.log('After stream attach events');
  statusGetters.forEach(function (s) {
    var getter = 'is' + capitalizeFirstLetter(s);
    console.log("cursor " + getter + " => ", cursor[getter]());
  });
}

function cappedStreamEvent(err, evName, data) {
  if (err) {
    console.log("capped stream got error", err);
    return;
  }

  if (evName) {
    console.log("capped stream got event", evName);
  }

  if (data) {
    console.log("capped stream got data", data);
  }
}


// Connection URL
var url = 'mongodb://localhost:27017/myproject';
// Use connect method to connect to the Server
MongoClient.connect(url, function(err, db) {
  assert.equal(null, err);
  console.log("Connected correctly to server");

  db.createCollection(cappedCollectionName,
    { "capped": true,
      "size": 100000,
      "max": 5000 },
    function(err, collection) {

      assert.equal(null, err);
      console.log("Created capped collection " + cappedCollectionName);

      createTailedCursor(db, cappedStreamEvent);
    });


  //db.close();
});
12
node_modules/mongodb/test.js
generated
vendored
Normal file
@@ -0,0 +1,12 @@
var MongoClient = require('./').MongoClient;

MongoClient.connect('mongodb://localhost:27017/test', {
  server: {
    socketOptions: {
      connectTimeoutMS: 30000,
      socketTimeoutMS: 30000
    }
  }
}, function(err, db) {
  db.close();
});
3
node_modules/mongodb/test_boot/boot.sh
generated
vendored
Normal file
@@ -0,0 +1,3 @@
mongod --verbose --smallfiles --noprealloc --setParameter enableTestCommands=1 \
  --port 39000 --dbpath ./data --sslMode allowSSL --sslCAFile ./ca.pem \
  --sslPEMKeyFile ./server_password.pem --sslPEMKeyPassword serverpassword --nojournal
49
node_modules/mongodb/test_boot/ca.pem
generated
vendored
Normal file
@@ -0,0 +1,49 @@
-----BEGIN CERTIFICATE-----
MIIDmDCCAoCgAwIBAgIDCZNjMA0GCSqGSIb3DQEBBQUAMHUxEjAQBgNVBAMTCWxv
Y2FsaG9zdDEVMBMGA1UECxMMSW50ZWdyYXRpb25zMRAwDgYDVQQKEwdNb25nb0RC
MRYwFAYDVQQHEw1OZXcgWW9yayBDaXR5MREwDwYDVQQIEwhOZXcgWW9yazELMAkG
A1UEBhMCVVMwHhcNMTUxMDA1MTgwNzUyWhcNMzUxMDA1MTgwNzUyWjB1MRIwEAYD
VQQDEwlsb2NhbGhvc3QxFTATBgNVBAsTDEludGVncmF0aW9uczEQMA4GA1UEChMH
TW9uZ29EQjEWMBQGA1UEBxMNTmV3IFlvcmsgQ2l0eTERMA8GA1UECBMITmV3IFlv
cmsxCzAJBgNVBAYTAlVTMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA
hmVxb75ocS+U3gN9YFFVWayHNUX+YbIEt2a/GwBd2IGF+gCtUVZgnQy+Sd1ElDrE
WO4SesU5zcsSdDdFQTzc73Pd6nyQ68WIr67w+W++EBF4y3ZSpRT8S1kjOWvh4UI4
enf/hf0eHsrcXZUNS5UfjGmAeXp11OBgc6kFXMrOqvkb8QnA7us1BkJHbLCBc11H
FUFQNY62J6EZ2AWjYsRRBA8C+5POIdlA4d0rD+4WrqoV/hP7Jry4lNOvRjy8POgR
0nUu30XNuk33Qew4OvpqbFjxHfyivvrq242BJaj8aZR0KTJx4pNaCTpGexT6JQwg
1wZ3GtLWg42mWcYUcp7NkwIDAQABozEwLzAMBgNVHRMEBTADAQH/MB8GA1UdEQQY
MBaCCWxvY2FsaG9zdIIJMTI3LjAuMC4xMA0GCSqGSIb3DQEBBQUAA4IBAQACrNA2
S6DGjp+18w+PiJzTcdgMnwMZzuLhDXrFtnndxT5BybsNNkY4QllGZCceRsqy4U78
Lm54qHGKVATblH68t+Cg2gYpMOBnL6we/B1fl845CCLWnVGWRIYDQvjMS4TMQeAQ
kgkB1VNujUibUncJ6V1Dz9W2Fm1lwwxO3vJgLyfq1pHHvOpryLRUE8dt6VnuVRvE
mPdIBiLThLw12SV6ID2bU4ueomrSGztrqU8JJCLNaLtPZYdEcrEEmGRurk1vdK60
ETxs1UnnpctCTLq03+iw/MQzvNpvbsKvim6i/p6Txi+16I9Xo9uSilpXeiI8NF+M
QiwvYpMwBAsBDPFY
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
MIIEpQIBAAKCAQEAhmVxb75ocS+U3gN9YFFVWayHNUX+YbIEt2a/GwBd2IGF+gCt
UVZgnQy+Sd1ElDrEWO4SesU5zcsSdDdFQTzc73Pd6nyQ68WIr67w+W++EBF4y3ZS
pRT8S1kjOWvh4UI4enf/hf0eHsrcXZUNS5UfjGmAeXp11OBgc6kFXMrOqvkb8QnA
7us1BkJHbLCBc11HFUFQNY62J6EZ2AWjYsRRBA8C+5POIdlA4d0rD+4WrqoV/hP7
Jry4lNOvRjy8POgR0nUu30XNuk33Qew4OvpqbFjxHfyivvrq242BJaj8aZR0KTJx
4pNaCTpGexT6JQwg1wZ3GtLWg42mWcYUcp7NkwIDAQABAoIBAAONQsY1PgVfwItt
hC67d4n7KUebJfHnBoDiUkKga9qA8c19nxb7NWpxH8ZL9rgEJZREFnDwLWrm5wHz
Q/1JYn1hjofX6uyQN+CMuwmcjmVWUN7vrZd3SoOjQ28NDtcpLNzvtu0Pl3Aw5EoE
2mkY89B+KUsMlXiexW3JFb8yDOwvx1HRSS5XX1uX8q7oyGt+J+p7b8Y029D09TdE
qflOQnimE83SKpCvsmMQsFFGBe+1aazTSQbsCWyrSe/Hucmbhj4I8HqO9KxIQGV4
yrB2hqisC1OMcmYKNnNboGD4Cv/FH5QIjhmIfYxsmVQXjBN6TDV/NvWzoN10YkO/
Ivn4XEECgYEA0MZRdQmXrZ31yPdC63SjTGPD3hrj+UnQs09PffOqblMcJyz+Xxtc
BdOUi4yM1jFdgAOlWJs6RH9Jbv1sbPekcuDtIOeIa6pK/WrlGJVVjUshshXKfvf3
BKyaLI13ydo+4yJx8IqYnnzUhh1XTxq9oZOg/Qji0z3A50eGHeJjNlsCgYEApMwI
2dg8z7CH3v9DHe+QrlTDd8YHQs5T7B4DabmPknIypNaR1EtT+2L3IBwpM9FZY2S1
5jWf9qPrav17iGwXTR/ElD/z46c6c2Cbfz1osFC0lgCLIHQnTNMA968wBRAz9ovB
9rBHujsVFf3VStCbZw2uIUBC/vlY6HAMH8kWmykCgYEAlu1n4eZHYtn434nxNOqj
SwwsgxqJiKRWXeqOGw3WPd+TULb0Rakvw36Lur6m6gnSAEOiewXs/6vNWJGYmsYE
epAknU8EjmETyVRHvpfmuzh/aZDbTSLRzu1An8i9uo0uPWGfYTew0ZKIyVPY1HUO
dgqVrZeI8xGHrStFx8Dr3FUCgYEAmeOKVFtQFiXkC0JVW4wNkC3JRnye5SylQ7WX
PFAymO+ZCULxdMxPBU4aNVr2wtO45E70aaAj236g5Q0GIZPGgpeqy0GZLzpHI9zP
+BbRPuxuZjDluo2c8WmM2r42WlQSIiXKH+HnaSmo/BtH5RA+PhKRS62lolIYSx3u
xHPUb1kCgYEAsHdcgO/uu2xYFKGTA4REoDZFhZ1BFPFcEOJfppXOs/u8/jfQqoIg
BMtspjqQv4BNhEE8T9ant6W6KZDGjE72/DI/Ei9a2260Ef3suUnYzFXRZTIy1/uz
U3SLb99bl60sUMxo2mYCn7mwgaFr0n+EFRKHcMPV6RfHVZO//VqCvmA=
-----END RSA PRIVATE KEY-----
48
node_modules/mongodb/test_boot/client.pem
generated
vendored
Normal file
@@ -0,0 +1,48 @@
-----BEGIN CERTIFICATE-----
MIIDhDCCAmygAwIBAgIDBigQMA0GCSqGSIb3DQEBBQUAMHUxEjAQBgNVBAMTCWxv
Y2FsaG9zdDEVMBMGA1UECxMMSW50ZWdyYXRpb25zMRAwDgYDVQQKEwdNb25nb0RC
MRYwFAYDVQQHEw1OZXcgWW9yayBDaXR5MREwDwYDVQQIEwhOZXcgWW9yazELMAkG
A1UEBhMCVVMwHhcNMTUxMDA1MTgxMDE2WhcNMzUxMDA1MTgxMDE2WjBvMRIwEAYD
VQQDEwlsb2NhbGhvc3QxDzANBgNVBAsTBktlcm5lbDEQMA4GA1UEChMHTW9uZ29E
QjEWMBQGA1UEBxMNTmV3IFlvcmsgQ2l0eTERMA8GA1UECBMITmV3IFlvcmsxCzAJ
BgNVBAYTAlVTMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEArstPxdOf
/QkCW85DOV5BMAAtjzdccONRlS4X7I0Jf96BkZmO1xXHQDJIdFZ7ejFc3HAUwIvH
HMT4PRmlFDXKfvdRSBgzOWm7KBCWHj9P+pZWtGB82W1/8nn6mIlucfAeS5QoUd0T
8Oxgm7yyPTTpPLO3OnhlihEvtrxsp7AJ8qsB3OCTpX2W3dm4wYS/PwOGFrGvz/GY
h+mgZBXVuk3jkl+qPadNJaoaRE01COCWXWP90G9/iSjcXs7RhHDWD6PA9NRxYgCO
bmzrEdrRViA5X70LuAy2NoitxVDzClzJeSrUwV1PfsPa9/hG0yDhCPW6FayoqZ7g
eSl0LR4rlCm8zQIDAQABoyMwITAfBgNVHREEGDAWgglsb2NhbGhvc3SCCTEyNy4w
LjAuMTANBgkqhkiG9w0BAQUFAAOCAQEAO6sdYHpgQT4ADqWDcRysLKk8QYAoIcx7
Z14bplQ+xHVuDGzgzAoQhoVGCNXYYgkvlB7z/1bdDCBkjIeVLkC4W9p/x8RZdKK8
rd7vxiBBfXSZzFxHZP9fY1k0r0/oWYGqelstCllOQ9s1SXQU+AbHxPtQ+2xABf0a
Pk71TymsNTNPx+ZqcP0OBZDNHaGYmy7QHo5/J+c7dNRUIRdB/sQmf20I2+/TaZVq
otBRw4dWp4X7wqtlYwnK0K12ujb7O5xoB6IjiYibpof455fTFg8SLW/lT9+NFjoV
78EhJ53vt+rdidLDZlme+GyJ6tZZajYYkZniOph8kLflDZ5yzU5ZIA==
-----END CERTIFICATE-----
-----BEGIN RSA PRIVATE KEY-----
MIIEpAIBAAKCAQEArstPxdOf/QkCW85DOV5BMAAtjzdccONRlS4X7I0Jf96BkZmO
1xXHQDJIdFZ7ejFc3HAUwIvHHMT4PRmlFDXKfvdRSBgzOWm7KBCWHj9P+pZWtGB8
2W1/8nn6mIlucfAeS5QoUd0T8Oxgm7yyPTTpPLO3OnhlihEvtrxsp7AJ8qsB3OCT
pX2W3dm4wYS/PwOGFrGvz/GYh+mgZBXVuk3jkl+qPadNJaoaRE01COCWXWP90G9/
iSjcXs7RhHDWD6PA9NRxYgCObmzrEdrRViA5X70LuAy2NoitxVDzClzJeSrUwV1P
fsPa9/hG0yDhCPW6FayoqZ7geSl0LR4rlCm8zQIDAQABAoIBABFYHT9NA3gRN8my
Sttbbe58Zgjq/mI5PDQoUbloOUQV6RdwfEr9XzsQgj4BAtmaIlPBT13XwNcq+FLf
w5Cg0coNcmRL+nE8+5Q7KRoxQBdmaY4WcO0GP4bo8aPHJ01o/SiUt/d/j2X9nox1
id4m7OnYD1te7PxpPinMwp9VCXeptDZK0m052lB0ZjvVSC8onSZtqE03o6x9Gagv
cKqwuPVGECt8081j4ev8BnvmzsdtEtXi1+GB4PCjTSroZIJXj0L+TUQStd9fFp37
poMqzrCMTohyDvZ6GlrdDJhHP9HZkUCORvNlqKjMWgbfNCeId1oaAAf9QUMjgFT2
3tsAWWECgYEA5NzSj7LNSnohKKkCq3lPEKpmVPMFwOrK5y/XMhl6x1ftYP/YRkEx
43aO8D9VHJdXCwxVhUc/iyP9YjIMlhvUa+HEX66mpGtWUO1rU51EyMCa+XEHnFtB
1SaEgOfHlJYl/52o30oGPpQ1cCV4GTpreQz2lo0GOK+rw9cI0auD2MUCgYEAw4U6
53vyznG1Ek+PvJj+GeT1yGy6rYamkkJprCplIl6flFS4Vybuu/I87nWkuN5smTV9
Tuu/ae56ILLlZQvIBphqAuJQ+uWky07mwH6pNsc/ZW3Z8HYaNhPbx9zAvfT0aaoK
q5VcfdzaElld3Xf8+O6Ex1rxBe+leEKWy5CTxGkCgYEAzQLu7+CYu5WprHWxuY9i
fyhVoNVOGyF4fx5A5FUkwfuG3jRxNYXjPcKPirEA+iqrw27aERjg2DB2I4d39OOb
JsTIF3opNvIcEUSMtwsTec3nRXm1+p9Y2qq1WY/HHdhe05sUkmkKLPaw4/P8oITP
j1FId1Z+8zqSYE15TfEZm40CgYEAumQvhHBgR7iUoxEf+O5tByiDuv8Uc8qyzynq
7Ys5AKvuO40Mc2EVJC1JMxcJ4oeARgp9HBiYURdcbAG/uDWkCc2Q+pDgHF+CW6Mr
TgUxsZv+7uXrZ1ZfpIAH/J+RS0XTH0xQgcSQ+Rfq+KUjW7dAfdt8z0P+E+ei+mOY
uBVrGUkCgYBsL8E68LZBRjtOfgwJz6lNalkgZdeexcnNiQ9O5qrElRrY8nh2Ds98
ifNsqD3f+fFp0PPSGymqeWl7R7rjCTpUvpuo2bFR7ctpV36cT40enKKq3Sw1sgCb
gRkQqjjNqYu+tHKXpQbKXh1zKW/o2Hkls+r8H/F3N7q0ujZ8ADbnYw==
-----END RSA PRIVATE KEY-----
51
node_modules/mongodb/test_boot/client_password.pem
generated
vendored
Normal file
@@ -0,0 +1,51 @@
-----BEGIN CERTIFICATE-----
MIIDhDCCAmygAwIBAgIDCXYCMA0GCSqGSIb3DQEBBQUAMHUxEjAQBgNVBAMTCWxv
Y2FsaG9zdDEVMBMGA1UECxMMSW50ZWdyYXRpb25zMRAwDgYDVQQKEwdNb25nb0RC
MRYwFAYDVQQHEw1OZXcgWW9yayBDaXR5MREwDwYDVQQIEwhOZXcgWW9yazELMAkG
A1UEBhMCVVMwHhcNMTUxMDA1MTgxNTU5WhcNMzUxMDA1MTgxNTU5WjBvMRIwEAYD
VQQDEwlsb2NhbGhvc3QxDzANBgNVBAsTBktlcm5lbDEQMA4GA1UEChMHTW9uZ29E
QjEWMBQGA1UEBxMNTmV3IFlvcmsgQ2l0eTERMA8GA1UECBMITmV3IFlvcmsxCzAJ
BgNVBAYTAlVTMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAta9xPfZM
/1BOGOb4Q370yEXBzbbZozUvrk8DddhoGPNtbIsQZqsgLltfipjOOdk2I0hQlIxl
lzVY6lEj4DQgji2N7v36ncTv0zPPfkCA8bCqQkRqps8rOiai9m/IzAHf0Xn1BQeT
T+nloK5lVja0cMObSIz03BH+fjoG7hokdkwdouzmnJD+/sBxFlf5viFILeqkp6cs
ig74hDecCrjSNg4ggOj2ovZT06AklLLHCqfKxJFTfbbU3bC2WSBDbEEEFx+wBdo4
PQFP/Tb8m0azVhvHQcZfPCpau5chzfvZNeUyzFGZ18hzevC1xOMyCkYMPaCISHdC
/tBd2PSWZUZJdwIDAQABoyMwITAfBgNVHREEGDAWgglsb2NhbGhvc3SCCTEyNy4w
LjAuMTANBgkqhkiG9w0BAQUFAAOCAQEAHSu12dUtCLepzr6ORukco5PdIUdQNFsX
KYc9kcTdriSP/b0K8cqAOBM13f/kLJDlnWh7ztkMqkBTl3ERxxPxNWamxknFoZxd
pECOTH4AEB2vyrQ9T8HXuv21pZsJM00d+hf1Ye89MzdXz72T3m8t+djkFFCdLXDI
f47CA+7WX1L2+426J2VnEsoRRcN0zCMtUj2XqE/wy+ASUOpLDwAHxstOcMZETLX8
6PTZ3+vKIc00xL2H9A/4RjQXMHEovwtANZdtYX1uWK5Xz2Eq081/KqnpNoKZe/rr
b6dF0JbkYq3J+BI/OOjJWOpHgUKWg3I6aosw6g6MJ+xsGImplF1DtQ==
-----END CERTIFICATE-----
-----BEGIN ENCRYPTED PRIVATE KEY-----
MIIFHzBJBgkqhkiG9w0BBQ0wPDAbBgkqhkiG9w0BBQwwDgQIkyaQoZbRA4sCAggA
MB0GCWCGSAFlAwQBAgQQhq07PWgmZVtZv7vehsHg3wSCBNDcvkYZfWfS8YqnyxzO
r+OaYFQKysdRrnHHqYZNwuSd8Piobau203jQvRvYtk90BwNso9joBU1f5B3FASJg
33rkE0Ls/XmVSpRnRRDRbMos/xAjoQAmer6c4mFFpPN8Dk5DEPbu8U5mH4vxn68Q
WyoXw7xxyAr586nykShRA00vE900Zhi+ajt4McU55CR46WwYkT4rciUpWpIpBUct
VXjZJiz8DAvUnVFR6ALmCMrOT17CQNoQwCjOX944myRwJCqijsnwl8BD+km0H6/J
7lUpQpIyIgFOhHk5f/VJEVeMdiVpohXTFRglkzuj/TNGqW05W9USJ/w3RoSIz/OV
HmhErNpkODejwcVmEtQIFSJkiKWGgDdoY0W/a/OKhvk0vQtwb9X3aVN84EFwinks
olBMmTJ/zRla61Or91KYchu5jSd2CPA3u+vqDuKMkavMYPTFg2RyrH/8FrPvr8EL
aLQjqY8pgaw4mIidljlhQAYAgfaJFxkbRHQBKdWXSbRQEk0fQuQfgXL7wA5RKl2R
RsP0ZkxsDmpyb9drR1yyLAeWVDRx1QkEbgZqku1I+lpKnUT/n1Kmi2mb0CdV61T7
vt2T4em1Yj2Qff9IUhEdCfDKF1AYWYnA6wPgE3DUuVZ+WXh+21s5WvMbWe9II9kT
i3ZMLmCXr9B3x7nGdY4DBmW3XX/23pPIDX7h03eyJezHZDNa/LYho6RP17WDnJCr
ExfpQ6Vabq19vTDl5YWIaTIJsBKRjkUZrJNgRpgSUl5sUQtOlL1BL6+kr4cACd43
jba9HNivQ3RslSj6d3mhqsxv0XfrOaJmkoSNL7UJ3JblRfyUQhxOeBtQowaNE67d
omYz7PcyJcKNtDTpwpKpj0tGQ7gva+EtKHr+ZkCuHUC5Rbl2bhhWIn8YR2ptm9s6
eieBv4beS7jd9L+dBTibYeWtqVyO5P9OKrDXn4FPyu5H6Wd9ajk8tkYrPm32l3t3
Zrn943LWaxDYHYr9/72HzY+0LAoy3bTayNqWvVWod7VKoy/I++z21CzodWOlWDCy
XExhQDmlCcXAuY7dxYObXhjdygaflPkLlF9Rm/akJrMi9jKrvY6cM8WJpNAK3Zv9
O+p9owg8rpuADoTEREiE6AgWfWOLeySMTs+Zt96sg6z2e5JyFhuU0FAlLqZGz2pX
4jNEc0/a5hX1COETuaWd5RngLXn1vCmPnfDqfr22UTk+YkxOxRT3ZbF53D9o5vli
c+on9dDYChEg7Uoxt4a51RsTWGMZfhtmIjiVd8mRCZr4hhrCJqPh4lUJ3qIJQtrO
omaV0JTFjpU0AhRzMt+RgITgM2ESmxE53FAT1E+zsySUKh/+/mPoeQ3eQDoDDf+q
nx9qLlvHrtd9eo2FfaWLyXWNN9KYNSCJ7u58s4Uz5+1a11+riy7I6hFPfnErFa0F
KAyUf7mvHATO1gFre9ILBnZzJ46QlWbMQE11d1ZfJM8st1nmNg+wHkzDoFTVi49F
nZu8pGZtkTQqrdzNhmjdVMXOzU0FfnxRuX2/iT+KmyfibzcE0gPQRoQia3FpI2rY
4FBmFMovgsehjjIWQkvBkbLQxGDmQgvXy1MAfA23XrMh1dvIYgEP+lotxNjtbk8M
KPll2OWzO0tXaj+OTreP/C5JTQ==
-----END ENCRYPTED PRIVATE KEY-----
29
node_modules/mongodb/test_boot/connect.js
generated
vendored
Normal file
@@ -0,0 +1,29 @@
var MongoClient = require('../').MongoClient,
    fs = require('fs');

// client.pem bundles the client certificate and its unencrypted RSA key
// (see the PEM just above), so one buffer can serve as both sslCert and sslKey.
var cert = fs.readFileSync(__dirname + "/client.pem");
// Loaded but unused below: validation is disabled, and the encrypted key in
// client_password.pem would additionally need a pass phrase to be usable.
var ca = fs.readFileSync(__dirname + "/ca.pem");
var key = fs.readFileSync(__dirname + "/client_password.pem");

var options = {
  server: {
    sslValidate: false,
    sslCert: cert,
    sslKey: cert
  }
};

// var options = { uri_decode_auth: true,
// db: {},
// server:
// { sslCA: '-----BEGIN CERTIFICATE-----\nMIIDmDCCAoCgAwIBAgIDCZNjMA0GCSqGSIb3DQEBBQUAMHUxEjAQBgNVBAMTCWxv\nY2FsaG9zdDEVMBMGA1UECxMMSW50ZWdyYXRpb25zMRAwDgYDVQQKEwdNb25nb0RC\nMRYwFAYDVQQHEw1OZXcgWW9yayBDaXR5MREwDwYDVQQIEwhOZXcgWW9yazELMAkG\nA1UEBhMCVVMwHhcNMTUxMDA1MTgwNzUyWhcNMzUxMDA1MTgwNzUyWjB1MRIwEAYD\nVQQDEwlsb2NhbGhvc3QxFTATBgNVBAsTDEludGVncmF0aW9uczEQMA4GA1UEChMH\nTW9uZ29EQjEWMBQGA1UEBxMNTmV3IFlvcmsgQ2l0eTERMA8GA1UECBMITmV3IFlv\ncmsxCzAJBgNVBAYTAlVTMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEA\nhmVxb75ocS+U3gN9YFFVWayHNUX+YbIEt2a/GwBd2IGF+gCtUVZgnQy+Sd1ElDrE\nWO4SesU5zcsSdDdFQTzc73Pd6nyQ68WIr67w+W++EBF4y3ZSpRT8S1kjOWvh4UI4\nenf/hf0eHsrcXZUNS5UfjGmAeXp11OBgc6kFXMrOqvkb8QnA7us1BkJHbLCBc11H\nFUFQNY62J6EZ2AWjYsRRBA8C+5POIdlA4d0rD+4WrqoV/hP7Jry4lNOvRjy8POgR\n0nUu30XNuk33Qew4OvpqbFjxHfyivvrq242BJaj8aZR0KTJx4pNaCTpGexT6JQwg\n1wZ3GtLWg42mWcYUcp7NkwIDAQABozEwLzAMBgNVHRMEBTADAQH/MB8GA1UdEQQY\nMBaCCWxvY2FsaG9zdIIJMTI3LjAuMC4xMA0GCSqGSIb3DQEBBQUAA4IBAQACrNA2\nS6DGjp+18w+PiJzTcdgMnwMZzuLhDXrFtnndxT5BybsNNkY4QllGZCceRsqy4U78\nLm54qHGKVATblH68t+Cg2gYpMOBnL6we/B1fl845CCLWnVGWRIYDQvjMS4TMQeAQ\nkgkB1VNujUibUncJ6V1Dz9W2Fm1lwwxO3vJgLyfq1pHHvOpryLRUE8dt6VnuVRvE\nmPdIBiLThLw12SV6ID2bU4ueomrSGztrqU8JJCLNaLtPZYdEcrEEmGRurk1vdK60\nETxs1UnnpctCTLq03+iw/MQzvNpvbsKvim6i/p6Txi+16I9Xo9uSilpXeiI8NF+M\nQiwvYpMwBAsBDPFY\n-----END CERTIFICATE-----\n-----BEGIN RSA PRIVATE KEY-----\nMIIEpQIBAAKCAQEAhmVxb75ocS+U3gN9YFFVWayHNUX+YbIEt2a/GwBd2IGF+gCt\nUVZgnQy+Sd1ElDrEWO4SesU5zcsSdDdFQTzc73Pd6nyQ68WIr67w+W++EBF4y3ZS\npRT8S1kjOWvh4UI4enf/hf0eHsrcXZUNS5UfjGmAeXp11OBgc6kFXMrOqvkb8QnA\n7us1BkJHbLCBc11HFUFQNY62J6EZ2AWjYsRRBA8C+5POIdlA4d0rD+4WrqoV/hP7\nJry4lNOvRjy8POgR0nUu30XNuk33Qew4OvpqbFjxHfyivvrq242BJaj8aZR0KTJx\n4pNaCTpGexT6JQwg1wZ3GtLWg42mWcYUcp7NkwIDAQABAoIBAAONQsY1PgVfwItt\nhC67d4n7KUebJfHnBoDiUkKga9qA8c19nxb7NWpxH8ZL9rgEJZREFnDwLWrm5wHz\nQ/1JYn1hjofX6uyQN+CMuwmcjmVWUN7vrZd3SoOjQ28NDtcpLNzvtu0Pl3Aw5EoE\n2mkY89B+KUsMlXiexW3JFb8yDOwvx1HRSS5XX1uX8q7oyGt+J+p7b8Y029D09TdE\nqflOQnimE83SKpCvsmMQsFFGBe+1aazTSQbsCWyrSe/Hucmbhj4I8HqO9KxIQGV4\nyrB2hqisC1OMcmYKNnNboGD4Cv/FH5QIjhmIfYxsmVQXjBN6TDV/NvWzoN10YkO/\nIvn4XEECgYEA0MZRdQmXrZ31yPdC63SjTGPD3hrj+UnQs09PffOqblMcJyz+Xxtc\nBdOUi4yM1jFdgAOlWJs6RH9Jbv1sbPekcuDtIOeIa6pK/WrlGJVVjUshshXKfvf3\nBKyaLI13ydo+4yJx8IqYnnzUhh1XTxq9oZOg/Qji0z3A50eGHeJjNlsCgYEApMwI\n2dg8z7CH3v9DHe+QrlTDd8YHQs5T7B4DabmPknIypNaR1EtT+2L3IBwpM9FZY2S1\n5jWf9qPrav17iGwXTR/ElD/z46c6c2Cbfz1osFC0lgCLIHQnTNMA968wBRAz9ovB\n9rBHujsVFf3VStCbZw2uIUBC/vlY6HAMH8kWmykCgYEAlu1n4eZHYtn434nxNOqj\nSwwsgxqJiKRWXeqOGw3WPd+TULb0Rakvw36Lur6m6gnSAEOiewXs/6vNWJGYmsYE\nepAknU8EjmETyVRHvpfmuzh/aZDbTSLRzu1An8i9uo0uPWGfYTew0ZKIyVPY1HUO\ndgqVrZeI8xGHrStFx8Dr3FUCgYEAmeOKVFtQFiXkC0JVW4wNkC3JRnye5SylQ7WX\nPFAymO+ZCULxdMxPBU4aNVr2wtO45E70aaAj236g5Q0GIZPGgpeqy0GZLzpHI9zP\n+BbRPuxuZjDluo2c8WmM2r42WlQSIiXKH+HnaSmo/BtH5RA+PhKRS62lolIYSx3u\nxHPUb1kCgYEAsHdcgO/uu2xYFKGTA4REoDZFhZ1BFPFcEOJfppXOs/u8/jfQqoIg\nBMtspjqQv4BNhEE8T9ant6W6KZDGjE72/DI/Ei9a2260Ef3suUnYzFXRZTIy1/uz\nU3SLb99bl60sUMxo2mYCn7mwgaFr0n+EFRKHcMPV6RfHVZO//VqCvmA=\n-----END RSA PRIVATE KEY-----',
// sslCert: '-----BEGIN CERTIFICATE-----\nMIIDhDCCAmygAwIBAgIDBigQMA0GCSqGSIb3DQEBBQUAMHUxEjAQBgNVBAMTCWxv\nY2FsaG9zdDEVMBMGA1UECxMMSW50ZWdyYXRpb25zMRAwDgYDVQQKEwdNb25nb0RC\nMRYwFAYDVQQHEw1OZXcgWW9yayBDaXR5MREwDwYDVQQIEwhOZXcgWW9yazELMAkG\nA1UEBhMCVVMwHhcNMTUxMDA1MTgxMDE2WhcNMzUxMDA1MTgxMDE2WjBvMRIwEAYD\nVQQDEwlsb2NhbGhvc3QxDzANBgNVBAsTBktlcm5lbDEQMA4GA1UEChMHTW9uZ29E\nQjEWMBQGA1UEBxMNTmV3IFlvcmsgQ2l0eTERMA8GA1UECBMITmV3IFlvcmsxCzAJ\nBgNVBAYTAlVTMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEArstPxdOf\n/QkCW85DOV5BMAAtjzdccONRlS4X7I0Jf96BkZmO1xXHQDJIdFZ7ejFc3HAUwIvH\nHMT4PRmlFDXKfvdRSBgzOWm7KBCWHj9P+pZWtGB82W1/8nn6mIlucfAeS5QoUd0T\n8Oxgm7yyPTTpPLO3OnhlihEvtrxsp7AJ8qsB3OCTpX2W3dm4wYS/PwOGFrGvz/GY\nh+mgZBXVuk3jkl+qPadNJaoaRE01COCWXWP90G9/iSjcXs7RhHDWD6PA9NRxYgCO\nbmzrEdrRViA5X70LuAy2NoitxVDzClzJeSrUwV1PfsPa9/hG0yDhCPW6FayoqZ7g\neSl0LR4rlCm8zQIDAQABoyMwITAfBgNVHREEGDAWgglsb2NhbGhvc3SCCTEyNy4w\nLjAuMTANBgkqhkiG9w0BAQUFAAOCAQEAO6sdYHpgQT4ADqWDcRysLKk8QYAoIcx7\nZ14bplQ+xHVuDGzgzAoQhoVGCNXYYgkvlB7z/1bdDCBkjIeVLkC4W9p/x8RZdKK8\nrd7vxiBBfXSZzFxHZP9fY1k0r0/oWYGqelstCllOQ9s1SXQU+AbHxPtQ+2xABf0a\nPk71TymsNTNPx+ZqcP0OBZDNHaGYmy7QHo5/J+c7dNRUIRdB/sQmf20I2+/TaZVq\notBRw4dWp4X7wqtlYwnK0K12ujb7O5xoB6IjiYibpof455fTFg8SLW/lT9+NFjoV\n78EhJ53vt+rdidLDZlme+GyJ6tZZajYYkZniOph8kLflDZ5yzU5ZIA==\n-----END CERTIFICATE-----\n-----BEGIN RSA PRIVATE KEY-----\nMIIEpAIBAAKCAQEArstPxdOf/QkCW85DOV5BMAAtjzdccONRlS4X7I0Jf96BkZmO\n1xXHQDJIdFZ7ejFc3HAUwIvHHMT4PRmlFDXKfvdRSBgzOWm7KBCWHj9P+pZWtGB8\n2W1/8nn6mIlucfAeS5QoUd0T8Oxgm7yyPTTpPLO3OnhlihEvtrxsp7AJ8qsB3OCT\npX2W3dm4wYS/PwOGFrGvz/GYh+mgZBXVuk3jkl+qPadNJaoaRE01COCWXWP90G9/\niSjcXs7RhHDWD6PA9NRxYgCObmzrEdrRViA5X70LuAy2NoitxVDzClzJeSrUwV1P\nfsPa9/hG0yDhCPW6FayoqZ7geSl0LR4rlCm8zQIDAQABAoIBABFYHT9NA3gRN8my\nSttbbe58Zgjq/mI5PDQoUbloOUQV6RdwfEr9XzsQgj4BAtmaIlPBT13XwNcq+FLf\nw5Cg0coNcmRL+nE8+5Q7KRoxQBdmaY4WcO0GP4bo8aPHJ01o/SiUt/d/j2X9nox1\nid4m7OnYD1te7PxpPinMwp9VCXeptDZK0m052lB0ZjvVSC8onSZtqE03o6x9Gagv\ncKqwuPVGECt8081j4ev8BnvmzsdtEtXi1+GB4PCjTSroZIJXj0L+TUQStd9fFp37\npoMqzrCMTohyDvZ6GlrdDJhHP9HZkUCORvNlqKjMWgbfNCeId1oaAAf9QUMjgFT2\n3tsAWWECgYEA5NzSj7LNSnohKKkCq3lPEKpmVPMFwOrK5y/XMhl6x1ftYP/YRkEx\n43aO8D9VHJdXCwxVhUc/iyP9YjIMlhvUa+HEX66mpGtWUO1rU51EyMCa+XEHnFtB\n1SaEgOfHlJYl/52o30oGPpQ1cCV4GTpreQz2lo0GOK+rw9cI0auD2MUCgYEAw4U6\n53vyznG1Ek+PvJj+GeT1yGy6rYamkkJprCplIl6flFS4Vybuu/I87nWkuN5smTV9\nTuu/ae56ILLlZQvIBphqAuJQ+uWky07mwH6pNsc/ZW3Z8HYaNhPbx9zAvfT0aaoK\nq5VcfdzaElld3Xf8+O6Ex1rxBe+leEKWy5CTxGkCgYEAzQLu7+CYu5WprHWxuY9i\nfyhVoNVOGyF4fx5A5FUkwfuG3jRxNYXjPcKPirEA+iqrw27aERjg2DB2I4d39OOb\nJsTIF3opNvIcEUSMtwsTec3nRXm1+p9Y2qq1WY/HHdhe05sUkmkKLPaw4/P8oITP\nj1FId1Z+8zqSYE15TfEZm40CgYEAumQvhHBgR7iUoxEf+O5tByiDuv8Uc8qyzynq\n7Ys5AKvuO40Mc2EVJC1JMxcJ4oeARgp9HBiYURdcbAG/uDWkCc2Q+pDgHF+CW6Mr\nTgUxsZv+7uXrZ1ZfpIAH/J+RS0XTH0xQgcSQ+Rfq+KUjW7dAfdt8z0P+E+ei+mOY\nuBVrGUkCgYBsL8E68LZBRjtOfgwJz6lNalkgZdeexcnNiQ9O5qrElRrY8nh2Ds98\nifNsqD3f+fFp0PPSGymqeWl7R7rjCTpUvpuo2bFR7ctpV36cT40enKKq3Sw1sgCb\ngRkQqjjNqYu+tHKXpQbKXh1zKW/o2Hkls+r8H/F3N7q0ujZ8ADbnYw==\n-----END RSA PRIVATE KEY-----',
// sslKey: '-----BEGIN CERTIFICATE-----\nMIIDhDCCAmygAwIBAgIDBigQMA0GCSqGSIb3DQEBBQUAMHUxEjAQBgNVBAMTCWxv\nY2FsaG9zdDEVMBMGA1UECxMMSW50ZWdyYXRpb25zMRAwDgYDVQQKEwdNb25nb0RC\nMRYwFAYDVQQHEw1OZXcgWW9yayBDaXR5MREwDwYDVQQIEwhOZXcgWW9yazELMAkG\nA1UEBhMCVVMwHhcNMTUxMDA1MTgxMDE2WhcNMzUxMDA1MTgxMDE2WjBvMRIwEAYD\nVQQDEwlsb2NhbGhvc3QxDzANBgNVBAsTBktlcm5lbDEQMA4GA1UEChMHTW9uZ29E\nQjEWMBQGA1UEBxMNTmV3IFlvcmsgQ2l0eTERMA8GA1UECBMITmV3IFlvcmsxCzAJ\nBgNVBAYTAlVTMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEArstPxdOf\n/QkCW85DOV5BMAAtjzdccONRlS4X7I0Jf96BkZmO1xXHQDJIdFZ7ejFc3HAUwIvH\nHMT4PRmlFDXKfvdRSBgzOWm7KBCWHj9P+pZWtGB82W1/8nn6mIlucfAeS5QoUd0T\n8Oxgm7yyPTTpPLO3OnhlihEvtrxsp7AJ8qsB3OCTpX2W3dm4wYS/PwOGFrGvz/GY\nh+mgZBXVuk3jkl+qPadNJaoaRE01COCWXWP90G9/iSjcXs7RhHDWD6PA9NRxYgCO\nbmzrEdrRViA5X70LuAy2NoitxVDzClzJeSrUwV1PfsPa9/hG0yDhCPW6FayoqZ7g\neSl0LR4rlCm8zQIDAQABoyMwITAfBgNVHREEGDAWgglsb2NhbGhvc3SCCTEyNy4w\nLjAuMTANBgkqhkiG9w0BAQUFAAOCAQEAO6sdYHpgQT4ADqWDcRysLKk8QYAoIcx7\nZ14bplQ+xHVuDGzgzAoQhoVGCNXYYgkvlB7z/1bdDCBkjIeVLkC4W9p/x8RZdKK8\nrd7vxiBBfXSZzFxHZP9fY1k0r0/oWYGqelstCllOQ9s1SXQU+AbHxPtQ+2xABf0a\nPk71TymsNTNPx+ZqcP0OBZDNHaGYmy7QHo5/J+c7dNRUIRdB/sQmf20I2+/TaZVq\notBRw4dWp4X7wqtlYwnK0K12ujb7O5xoB6IjiYibpof455fTFg8SLW/lT9+NFjoV\n78EhJ53vt+rdidLDZlme+GyJ6tZZajYYkZniOph8kLflDZ5yzU5ZIA==\n-----END CERTIFICATE-----\n-----BEGIN RSA PRIVATE KEY-----\nMIIEpAIBAAKCAQEArstPxdOf/QkCW85DOV5BMAAtjzdccONRlS4X7I0Jf96BkZmO\n1xXHQDJIdFZ7ejFc3HAUwIvHHMT4PRmlFDXKfvdRSBgzOWm7KBCWHj9P+pZWtGB8\n2W1/8nn6mIlucfAeS5QoUd0T8Oxgm7yyPTTpPLO3OnhlihEvtrxsp7AJ8qsB3OCT\npX2W3dm4wYS/PwOGFrGvz/GYh+mgZBXVuk3jkl+qPadNJaoaRE01COCWXWP90G9/\niSjcXs7RhHDWD6PA9NRxYgCObmzrEdrRViA5X70LuAy2NoitxVDzClzJeSrUwV1P\nfsPa9/hG0yDhCPW6FayoqZ7geSl0LR4rlCm8zQIDAQABAoIBABFYHT9NA3gRN8my\nSttbbe58Zgjq/mI5PDQoUbloOUQV6RdwfEr9XzsQgj4BAtmaIlPBT13XwNcq+FLf\nw5Cg0coNcmRL+nE8+5Q7KRoxQBdmaY4WcO0GP4bo8aPHJ01o/SiUt/d/j2X9nox1\nid4m7OnYD1te7PxpPinMwp9VCXeptDZK0m052lB0ZjvVSC8onSZtqE03o6x9Gagv\ncKqwuPVGECt8081j4ev8BnvmzsdtEtXi1+GB4PCjTSroZIJXj0L+TUQStd9fFp37\npoMqzrCMTohyDvZ6GlrdDJhHP9HZkUCORvNlqKjMWgbfNCeId1oaAAf9QUMjgFT2\n3tsAWWECgYEA5NzSj7LNSnohKKkCq3lPEKpmVPMFwOrK5y/XMhl6x1ftYP/YRkEx\n43aO8D9VHJdXCwxVhUc/iyP9YjIMlhvUa+HEX66mpGtWUO1rU51EyMCa+XEHnFtB\n1SaEgOfHlJYl/52o30oGPpQ1cCV4GTpreQz2lo0GOK+rw9cI0auD2MUCgYEAw4U6\n53vyznG1Ek+PvJj+GeT1yGy6rYamkkJprCplIl6flFS4Vybuu/I87nWkuN5smTV9\nTuu/ae56ILLlZQvIBphqAuJQ+uWky07mwH6pNsc/ZW3Z8HYaNhPbx9zAvfT0aaoK\nq5VcfdzaElld3Xf8+O6Ex1rxBe+leEKWy5CTxGkCgYEAzQLu7+CYu5WprHWxuY9i\nfyhVoNVOGyF4fx5A5FUkwfuG3jRxNYXjPcKPirEA+iqrw27aERjg2DB2I4d39OOb\nJsTIF3opNvIcEUSMtwsTec3nRXm1+p9Y2qq1WY/HHdhe05sUkmkKLPaw4/P8oITP\nj1FId1Z+8zqSYE15TfEZm40CgYEAumQvhHBgR7iUoxEf+O5tByiDuv8Uc8qyzynq\n7Ys5AKvuO40Mc2EVJC1JMxcJ4oeARgp9HBiYURdcbAG/uDWkCc2Q+pDgHF+CW6Mr\nTgUxsZv+7uXrZ1ZfpIAH/J+RS0XTH0xQgcSQ+Rfq+KUjW7dAfdt8z0P+E+ei+mOY\nuBVrGUkCgYBsL8E68LZBRjtOfgwJz6lNalkgZdeexcnNiQ9O5qrElRrY8nh2Ds98\nifNsqD3f+fFp0PPSGymqeWl7R7rjCTpUvpuo2bFR7ctpV36cT40enKKq3Sw1sgCb\ngRkQqjjNqYu+tHKXpQbKXh1zKW/o2Hkls+r8H/F3N7q0ujZ8ADbnYw==\n-----END RSA PRIVATE KEY-----' },
// replSet: { connectWithNoPrimary: true },
// mongos: {} }

MongoClient.connect('mongodb://localhost:39000/test?ssl=true', options, function(err) {
  console.dir(err);
});
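The key read from client_password.pem above is encrypted (its PEM block is BEGIN ENCRYPTED PRIVATE KEY), so it can only stand in for the plaintext key if a pass phrase is supplied as well. A minimal sketch of that variant, assuming the same 2.x-era server options used above plus an sslPass pass-phrase option; 'secret' is a hypothetical placeholder, since the real pass phrase is not recorded in this tree:

// Sketch: connect with the password-protected client key instead of client.pem.
// Assumes 2.x-era server options; 'secret' is a hypothetical pass phrase.
var MongoClient = require('../').MongoClient,
    fs = require('fs');

var ca = fs.readFileSync(__dirname + "/ca.pem");
var cert = fs.readFileSync(__dirname + "/client.pem");
var key = fs.readFileSync(__dirname + "/client_password.pem");

MongoClient.connect('mongodb://localhost:39000/test?ssl=true', {
  server: {
    sslValidate: true,  // validate the server cert against the bundled CA
    sslCA: ca,
    sslCert: cert,
    sslKey: key,
    sslPass: 'secret'   // hypothetical; an encrypted key needs its pass phrase
  }
}, function(err, db) {
  console.dir(err);
  if (db) db.close();
});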
2
node_modules/mongodb/test_boot/data/WiredTiger
generated
vendored
Normal file
@@ -0,0 +1,2 @@
WiredTiger
WiredTiger 2.6.2: (June 4, 2015)
1
node_modules/mongodb/test_boot/data/WiredTiger.lock
generated
vendored
Normal file
@@ -0,0 +1 @@
WiredTiger lock file
6
node_modules/mongodb/test_boot/data/WiredTiger.turtle
generated
vendored
Normal file
@@ -0,0 +1,6 @@
WiredTiger version string
WiredTiger 2.6.2: (June 4, 2015)
WiredTiger version
major=2,minor=6,patch=2
file:WiredTiger.wt
allocation_size=4KB,app_metadata=,block_allocation=best,block_compressor=,cache_resident=0,checkpoint=(WiredTigerCheckpoint.41=(addr="018581e4c6553bd08681e44d72bf598981e42f54fd43808080e28fc0e3027fc0",order=41,time=1444229807,size=172032,write_gen=80)),checkpoint_lsn=(4294967295,9223372036854775807),checksum=uncompressed,collator=,columns=,dictionary=0,encryption=(keyid=,name=),format=btree,huffman_key=,huffman_value=,id=0,internal_item_max=0,internal_key_max=0,internal_key_truncate=,internal_page_max=4KB,key_format=S,key_gap=10,leaf_item_max=0,leaf_key_max=0,leaf_page_max=32KB,leaf_value_max=0,log=(enabled=),memory_page_max=5MB,os_cache_dirty_max=0,os_cache_max=0,prefix_compression=0,prefix_compression_min=4,split_deepen_min_child=0,split_deepen_per_child=0,split_pct=75,value_format=S,version=(major=1,minor=1)
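WiredTiger.turtle is the storage engine's bootstrap metadata: label lines alternating with value lines (the version string, the version triple, then the full configuration of the metadata table in file:WiredTiger.wt). A minimal sketch of reading the version triple back out of such a file, assuming only the label/value layout shown above; this is an illustration, not an official WiredTiger API:

// Sketch: pull the engine version out of a WiredTiger.turtle file.
var fs = require('fs');

function wtVersion(turtlePath) {
  var lines = fs.readFileSync(turtlePath, 'utf8').split('\n');
  // The value follows the exact label line "WiredTiger version".
  var idx = lines.indexOf('WiredTiger version');
  if (idx === -1 || !lines[idx + 1]) return null;
  // The value line looks like: major=2,minor=6,patch=2
  var v = {};
  lines[idx + 1].split(',').forEach(function(pair) {
    var kv = pair.split('=');
    v[kv[0]] = parseInt(kv[1], 10);
  });
  return v;  // e.g. { major: 2, minor: 6, patch: 2 }
}

console.log(wtVersion(__dirname + '/data/WiredTiger.turtle'));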
BIN
node_modules/mongodb/test_boot/data/WiredTiger.wt
generated
vendored
Normal file
Binary file not shown.
BIN
node_modules/mongodb/test_boot/data/WiredTigerLAS.wt
generated
vendored
Normal file
Binary file not shown.
BIN
node_modules/mongodb/test_boot/data/_mdb_catalog.wt
generated
vendored
Normal file
Binary file not shown.
BIN
node_modules/mongodb/test_boot/data/collection-0-757073248613337118.wt
generated
vendored
Normal file
Binary file not shown.
BIN
node_modules/mongodb/test_boot/data/diagnostic.data/metrics.2015-10-07T14-44-37Z-00000
generated
vendored
Normal file
Binary file not shown.
BIN
node_modules/mongodb/test_boot/data/diagnostic.data/metrics.2015-10-07T14-45-15Z-00000
generated
vendored
Normal file
Binary file not shown.
BIN
node_modules/mongodb/test_boot/data/diagnostic.data/metrics.2015-10-07T14-46-31Z-00000
generated
vendored
Normal file
Binary file not shown.
BIN
node_modules/mongodb/test_boot/data/diagnostic.data/metrics.2015-10-07T14-47-25Z-00000
generated
vendored
Normal file
Binary file not shown.
BIN
node_modules/mongodb/test_boot/data/diagnostic.data/metrics.2015-10-07T14-49-07Z-00000
generated
vendored
Normal file
Binary file not shown.
BIN
node_modules/mongodb/test_boot/data/diagnostic.data/metrics.2015-10-07T14-50-41Z-00000
generated
vendored
Normal file
Binary file not shown.
BIN
node_modules/mongodb/test_boot/data/diagnostic.data/metrics.2015-10-07T14-50-53Z-00000
generated
vendored
Normal file
Binary file not shown.
BIN
node_modules/mongodb/test_boot/data/diagnostic.data/metrics.2015-10-07T14-52-31Z-00000
generated
vendored
Normal file
Binary file not shown.
BIN
node_modules/mongodb/test_boot/data/diagnostic.data/metrics.2015-10-07T14-54-53Z-00000
generated
vendored
Normal file
Binary file not shown.
BIN
node_modules/mongodb/test_boot/data/diagnostic.data/metrics.2015-10-07T14-55-09Z-00000
generated
vendored
Normal file
Binary file not shown.
BIN
node_modules/mongodb/test_boot/data/diagnostic.data/metrics.2015-10-07T14-55-38Z-00000
generated
vendored
Normal file
Binary file not shown.
BIN
node_modules/mongodb/test_boot/data/index-1-757073248613337118.wt
generated
vendored
Normal file
Binary file not shown.
0
node_modules/mongodb/test_boot/data/mongod.lock
generated
vendored
Executable file
BIN
node_modules/mongodb/test_boot/data/sizeStorer.wt
generated
vendored
Normal file
Binary file not shown.
BIN
node_modules/mongodb/test_boot/data/storage.bson
generated
vendored
Normal file
Binary file not shown.
51
node_modules/mongodb/test_boot/server_password.pem
generated
vendored
Normal file
@@ -0,0 +1,51 @@
-----BEGIN CERTIFICATE-----
MIIDhDCCAmygAwIBAgIDBHSUMA0GCSqGSIb3DQEBBQUAMHUxEjAQBgNVBAMTCWxv
Y2FsaG9zdDEVMBMGA1UECxMMSW50ZWdyYXRpb25zMRAwDgYDVQQKEwdNb25nb0RC
MRYwFAYDVQQHEw1OZXcgWW9yayBDaXR5MREwDwYDVQQIEwhOZXcgWW9yazELMAkG
A1UEBhMCVVMwHhcNMTUxMDA1MTgxNDM1WhcNMzUxMDA1MTgxNDM1WjBvMRIwEAYD
VQQDEwlsb2NhbGhvc3QxDzANBgNVBAsTBktlcm5lbDEQMA4GA1UEChMHTW9uZ29E
QjEWMBQGA1UEBxMNTmV3IFlvcmsgQ2l0eTERMA8GA1UECBMITmV3IFlvcmsxCzAJ
BgNVBAYTAlVTMIIBIjANBgkqhkiG9w0BAQEFAAOCAQ8AMIIBCgKCAQEAh0BxOBv1
b61NuBT+9K6CxW1+z0gDUwq9uq3kNQ5GTS80CUsEBWVWDypeXEwG5S2DyfbRLGAp
B1DwPJEWagxpVYrGTcLaOYyI0NsSP+e1mGUKosDRPrMuFYjFRGAJ5CGSlOLrDsnI
nRg9PeqUCrcrnn/SopQOQL9LKQgXg8F2d2RHgjpWlbs70soV8WlSQmdIlaemwBgY
4PzyVmzEn1bd50FK3xicqiDd9sYbbB66Touj7qofUd3EhbzGDZ+L4/CPPB9LOrFk
7CKgyrG26c+HNxLBgU2lqImBaBz756taHFl5kMm81b9MVtDxqcLMkar3mSZhQkJV
C/eDMVSV6Dji4QIDAQABoyMwITAfBgNVHREEGDAWgglsb2NhbGhvc3SCCTEyNy4w
LjAuMTANBgkqhkiG9w0BAQUFAAOCAQEAOpkRV0MpPBQEiHwzCyHmCyDL7SWL5yZY
zThKu3+falIdSFur3S+h7Bcgj7BVTKe368OLfBiTxLjgwFu5ohItuvxAiJ3J1+2n
awnJFhNoayboURA7pjrretTsUOOkozPD9DMtw8wdfZ8itCB2/jysmzm/yTKvEfLv
dkTcm6woUwJUA7o8QaIrL0bxyOlgfy0+5NCm8anVj53a42yCyL1Wz6NX4+kbzntN
F6h4wrjK0DvgdJsQNXz035BRo0NZfygHk5nH4OwqzjadIfQsNyKz7z8xcR2142nl
vh2Gt0LE9NHyQGAnjClEMH/z1OizmYoWrXJWJ3sOAQqO3D5Xdd5M8Q==
-----END CERTIFICATE-----
-----BEGIN ENCRYPTED PRIVATE KEY-----
MIIFHzBJBgkqhkiG9w0BBQ0wPDAbBgkqhkiG9w0BBQwwDgQI+rkdko8FHaICAggA
MB0GCWCGSAFlAwQBAgQQb46Jd6bUgVumEFwAal391QSCBND2HRh6uf7JCluFj9pA
Iy2iyLw1zqBOGuGcnaIBD2B/2tQ1P8sIC0khVy0d3/S+eLl/B6FG4eWNaLVxboSy
BXHnWu1cYl8N4c2xhg4W7UNCfT9o1I7xqak5m2F38ImsRBm0nK/8HVx9yxuwzPDt
IhNbSlsD7rht++8czm+89akgCZ3XtJK6JUyOCV0b1CHJPyM6EyG/3fS8zGK2eHFn
FA7Q7V19VyaRs4ZB/Xo0bQH/H5wyQqiybxUB8DdnGPbedGlFc6ewzYdxw9krQxek
vZRQRaB6L+0F8XwukcE8zldBlrdEWQIv/Jl0GfyDtnsFYAPzdAyMjmDf1V7+vJVZ
Lq4bmKK3A2DtuR1zivLY/9LPZMIss9JMsGscfl/v1n3OmjtZloJ1fDvnIpoouXG1
g0p+BhwmVE65YwzDlzDvDD4v/hpvqEJKLcq6cBVmFzrXyWbx5udtRH+MAcu3xVeF
rvjaSyD2rSQMxySq+3uF+01rS8JgcYh4jkpCRgdwoF6zV896vmsou+FxmTf9G/Mk
WDjBugGW8wXXVuX01BniiFbYDMcKs2T8v7tOpXC4fJaM/+/J7GypJNoWedwa325L
Got8mFbrVvZ665IoODqUe6MT1JHTMjuTLmnkRelOpPQEOPKjOLGvJA6Auhpdr8Jb
ZBId1jALfu39e1gIJK1OOaWva7VJfBVpTeruItcwCP0vjK1hOxJcl435RWp4PN59
hUww9bH84DD4RodtIUoFb8qiFZUtMFb3WQD98ovV71J3FjOK0ie3My39h9Disc5L
yb1Y+b6y3Fv3BaDP7lywCtZZmOeo1NhY734YjqWNMecHjGbPWpkYMdORDT4BEVUS
9FPzTULkxEhVCFjY3vsVuo7QRD0dsdGoZsQ+fJt6nPYghmWTLYYCUT5Dm1pSsOXt
bRIKMz6UT80WcnO4IfWtCTiP8gUi0KGKiQBV+IRXO5dGyb6myfyvsSmg26QxaUE/
cn257JbO9PheQbETwOz59ehoBeV676jXS6vTl99K1jWHcQxL+TuY5B5WG2I7/zAm
jG0NvS5h6S6mU55ZrMVCM/ZH86zkzZRGAm2WyXuVOTZaPK3HXKTcLwEHV/K4eYjY
Gnr24i0zosqjqBQsHHrQXitzbit7MVgEiSvGcoiobO2jRltROUQYaZbhStS9F4O+
adA5NOkJqqIbrGypJAU3gSlhSxqXMb4WTn6FDRYfw1QBlHcU5CSvuGXSiiwiKi4U
gni6fiDqXS2o3Op14FDC98OtzkaCKEFvIsr8HpG2B+Ol6hIScmqDJgLqR3vLZJgx
aiFNgMo6b9O8lc36yVGRk+nSaMfw90LL9pxdwxeKXhpG39N9yZo2G7KNAckZCube
6BZyMerjIPWVxpQc8hRFTU+RZG3NRNjyNwaPDU7geJuOy0Ez3AoCVM3ct4rM599X
oPka+cBp7ZbzTufBmiHVAwM+kZJkqX+O82QU8GHuYewHUTuYNFIFM7x+GiTa2YiD
aNPdoM2xayvCowXiizBvVrHn9DB2m1FHupGa7OuKpyiNDArsKQIhslP46mqPu2V0
BiXRr7dd2zFvCCQP3l3n7zM58yW1geQUYGsrsScJgKppIc2QttD2iiv0TEZ0ZTrN
+GVpzwmLTBMKBVHu7ZW3fM/MUg==
-----END ENCRYPTED PRIVATE KEY-----
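server_password.pem pairs a server certificate with an encrypted private key; together with ca.pem and the data/ directory above, it is the server side of the fixtures that connect.js exercises. A minimal sketch of booting mongod against these files, assuming the 3.0-era --ssl* flag names and a hypothetical pass phrase ('secret'), since the real one is not recorded in this tree:

// Sketch: start a TLS-enabled mongod using the fixtures in this directory.
var spawn = require('child_process').spawn;

var mongod = spawn('mongod', [
  '--port', '39000',
  '--dbpath', __dirname + '/data',
  '--sslMode', 'requireSSL',
  '--sslPEMKeyFile', __dirname + '/server_password.pem',
  '--sslPEMKeyPassword', 'secret',  // hypothetical pass phrase
  '--sslCAFile', __dirname + '/ca.pem'
]);

// Echo server output so startup errors (e.g. a wrong pass phrase) are visible.
mongod.stdout.on('data', function(chunk) {
  process.stdout.write(chunk);
});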