python,rethinkdb,rethinkdb-python,reql
So as mentioned in the discussions, the numbers in the query should not be quoted, unless they are strings. Unquoting the numbers should make it work.
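A minimal illustration of the difference (the table and field names here are made up, not taken from the original question):

```
// works: numeric comparison against a number
r.table('events').filter(r.row('count').ge(1000))

// not what you want: "1000" is a string, so this is not a numeric comparison
r.table('events').filter(r.row('count').ge("1000"))
```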
You can manually map r.db('mydb').table('employees') .eqJoin('dept', r.db('mydb').table('departments')) .map(function(doc) { return { personName: doc("left")("name"), departmentName: doc("right")("name"), // etc. } }) ...
The problem is that your task is asynchronous, but you're treating it as a synchronous task. Your task finishes and your process exits, but you still haven't created your table or executed your insert. What you need to do is add one line right underneath registerTask that creates a done...
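The answer above is cut off; assuming this is a Grunt task (the registerTask wording suggests it), the fix it describes would look roughly like this sketch, with the task, table, and document names all made up:

```
grunt.registerTask('seed', function() {
  var done = this.async();                      // tell Grunt to wait for the async work
  r.connect({host: 'localhost', port: 28015}, function(err, conn) {
    if (err) { grunt.log.error(err); return done(false); }
    r.tableCreate('employees').run(conn, function(err) {
      if (err) { grunt.log.error(err); return done(false); }
      r.table('employees').insert({name: 'Alice'}).run(conn, function(err) {
        conn.close();
        done(!err);                              // signal Grunt that the task has finished
      });
    });
  });
});
```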
You can use the map and reduce commands for that: r.table("test_pagol").filter( r.row("timestamp").ge(1429617902988) .and(r.row("timestamp").le(1429617922119)) ).map({ 'total_page_position': r.row('position'), 'total_page_load': r.row('page') }).reduce(function(left, right) { return { 'total_page_position': left('total_page_position').add(right('total_page_position')), 'total_page_load': left('total_page_load').add(right('total_page_load')) } }).merge({ 'CRT': r.row('total_page_load').div(r.row('total_page_position')) }) More...
You can access the primary index with get or getAll. So if you rewrite your first query to be r.db('rethinkdb_faker').table('contacts').get(3453) it should be much faster.
I'm not sure why you're getting an error saying that value is a function, but I don't think you can just run JavaScript inside the function. Keep in mind that the entry inside the replace function is not a JS object. It's a ReQL object. How about trying the following: var migrateDb...
javascript,database,nosql,rethinkdb
You can map each element of the array to a boolean (whether the value for "field" is "val"), then reduce with and. So something like this: reql.filter(function(tb) { return tb("one").map(function(element) { return element("field").eq("val") }).reduce(function(left, right) { return left.and(right) }) }) ...
python,lambda,rethinkdb,rethinkdb-python
I think you should be able to do something like this: tags = ["school", "hollywood"] r.db("test").table("posts").filter( lambda post: post["tags"].contains(lambda tag: r.expr(tags).contains(tag) ) ).run(conn) See http://rethinkdb.com/api/python/contains/...
I would do it like this: r.table('t1').filter(function(parent) { return r.table('t2').get_all(parent('ParentId'), {index: 'ParentId'}).count().eq(0); }) ...
You could do this: ``` r.table('posts').filter(function(post) { return post.merge({field1: value1, field2: value2}).ne(post) }) ``` Basically merge the post with the object you want to "match" against, and if that changes the value of post then include the post in the output....
Update plus the changeAt term: r.table('blog').get("1a48c847-4fee-4968-8cfd-5f8369c01f64").update(function(row){ return { sections: row('sections').changeAt(1, row('sections')(1).merge({title: "s2-modified"})) } }) The above is good if you already know the index of the item you want to change. If you need to find the index, then update it, you can use the .indexesOf command to look up...
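The end of the answer above is truncated; a hedged sketch of the lookup it describes (the term is indexesOf in older drivers and offsetsOf in newer ones, and the section title 's2' is an assumption):

```
r.table('blog').get("1a48c847-4fee-4968-8cfd-5f8369c01f64").update(function(row) {
  // find the position of the section titled "s2", then rewrite it in place
  return row('sections').indexesOf(function(s) { return s('title').eq('s2'); })(0)
    .do(function(idx) {
      return {
        sections: row('sections').changeAt(idx, row('sections')(idx).merge({title: 's2-modified'}))
      };
    });
})
```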
You can get your answer as grouped data like so: r.table('test').group(function(row) { return r.epochTime(row('created_ts')).hours(); }).sum('cost') If you want the exact format you specified, you can do it like this: r.table('test').group(function(row) { return r.epochTime(row('created_ts')).hours(); }).sum('cost').ungroup().map(function(gr) { return {hour: gr('group'), cost: gr('reduction')} }) ...
You can nest ReQL expressions (including queries on another table) inside a method like filter. You want to use an index here to make things faster. r.table('tasks').indexCreate('user_id').run() r.table('users').filter(...).filter(function(user) { return r.table('tasks').getAll(user('id'), {index: 'user_id'}).isEmpty() }) ...
This appears to have been an oversight when adding export/import of secondary indexes - the import script is looking for the indexes field in the info, which doesn't exist when importing a single file. This can be worked around by providing the flag --no-secondary-indexes. A fix was released in the...
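For reference, a single-file import with that workaround might look like this (the file and table names are made up):

```
rethinkdb import -f contacts.json --table mydb.contacts --no-secondary-indexes
```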
This is because the Data Explorer only allows JavaScript as an input. You need to switch to something like this to make it work: r.table('posts').map(function(post){return 1}) ...
Whenever you get the "sequence of sequences" error, the solution usually involves using concatMap. In this case, I think this will get you what you want: r.db('db').table('table') .concatMap(r.row('locations')) .filter(r.row('alerts')('person').contains(200)) concatMap with the identity function function(x){return x} is also known as "flatten" (though there isn't a reql term called flatten) Edit:...
Try: r.db('db').table('table').get('id').update({ "field-to-remove": r.literal(), "field-to-update": "new-value" }) You don't need to use replace here since you don't care about explicitly setting the other fields....
From the docs, it appears that there is the instance FromDatum a => Result [a] Which explains why your getAllUsers function works, however, there is not an instance for FromDatum a => Result a Instead, it appears the author of this API would rather you use the instance FromDatum a...
It seems that you want every document/row to represent a different type of event: insert a new document into the database if that event type doesn't exist in the database, and add a total_event property with a count of...
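The answer above is cut off, but a version-agnostic way to get that behavior is a branch on the existing document. This sketch assumes the event type is used as the primary key and the counter field is called total_event (both assumptions), and note it is not atomic:

```
r.branch(
  r.table('events').get('page_view').eq(null),
  // first time we see this event type: create it with a count of 1
  r.table('events').insert({id: 'page_view', total_event: 1}),
  // otherwise just bump the counter
  r.table('events').get('page_view').update({total_event: r.row('total_event').add(1)})
)
```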
Check URL What happens if you do the http request by itself? I tried it and got a 404 on that resource. I would run that command first and make sure it works: r.http("91231cd2.ngrok.io/data/geojson/MP14_REGION_WEB_PL_FLAT.json") Try r.args r.polygon expects at least 3 arguments, but you're passing it one. You might try using r.args...
The group command with multi should do what you want: heroes.group('magazine_titles', multi=True)['name'] ...
Stealing the answer from the user group: There's currently no good way to operate on anything except doubles in RethinkDB. (We'll probably add support for other numeric types in the future.) If you just need to store and retrieve longs, you could store them as strings. If you know that...
First, let's clarify the relationship between socket.io and RethinkDB changefeeds. Socket.io is intended for realtime communication between the client (the browser) and the server (Node.js). RethinkDB changefeeds are a way for your server (Node.js) to listen to changes in the database. The client can't communicate with RethinkDB directly. A very typical architecture...
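A minimal sketch of that architecture, assuming an existing socket.io server (io), an open RethinkDB connection (conn), and a messages table (all names are assumptions):

```
r.table('messages').changes().run(conn, function(err, cursor) {
  if (err) throw err;
  cursor.each(function(err, change) {
    if (err) throw err;
    // push every database change out to all connected browsers
    io.emit('message:changed', change.new_val);
  });
});
```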
Michael Lucy of RethinkDB wrote: For .get.changes and .order_by.limit.changes you should be fine because we already send the initial value of the query for those. For other queries, the only way to do that right now is to subscribe to changes on the query, execute the query, and then read...
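A hedged sketch of that workaround for a plain filter query (the table and field names are made up): open the feed first, then run the query, so nothing can slip through between the two steps.

```
// 1. subscribe to changes on the query
r.table('users').filter({country: 'US'}).changes().run(conn, function(err, feed) {
  if (err) throw err;
  feed.each(function(err, change) { /* apply each change to your local copy */ });
});

// 2. then execute the query itself to get the current state
r.table('users').filter({country: 'US'}).run(conn, function(err, cursor) {
  if (err) throw err;
  cursor.toArray(function(err, rows) { /* rows is the initial result set */ });
});
```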
I actually met a similar problem and here is what I did: .group(r.row('date').toEpochTime().sub(r.row('date').toEpochTime().mod(<INTERVAL_IN_SECONDS>))) What this does is group timestamps into buckets of <INTERVAL_IN_SECONDS>. I don't know if this is the best way for the task but it works for me....
The answer is... it depends on what you want to do and how your query is structured. #1 Simple Queries If you have a simple filter query that is listening for changes on all 'users' from a particular country, your query won't get reevaluated every single time. RethinkDB handles that...
ReQL's orderBy term can take a function and order by its result. I'm not certain about the exact syntax in Thinky (it should be similar), but this is how you would do it in plain ReQL: r.table(...).orderBy(function (row) { return row('sorting_index'); }) The function function (row) { return row('sorting_index'); }...
javascript,node.js,rethinkdb,koa
I believe this is the correct way. This approach is used by Baucis, and also koa-mongo-rest with Mongoose. It can even be taken a bit further. An example url like: /api/users?conditions={"name":"john", "city":"London"}&limit=10&skip=1&sort=-zipcode Can be processed by the following code: findAll: function*(next) { yield next; var error, result; try { var conditions...
Whenever you group something, you need to then ungroup it. So, for your query, you need to add ungroup at the end. r.table('artists')('artist')('aliases').group('name').count().ungroup() That being said, if all you want is to get the number of aliases for every artist, group is not really the method for that. You're better off...
Unfortunately, you're going to need a server. It might be node.js or it might be another language, but you'll need a server. RethinkDB is not Firebase. It can't be queried from your browser. If you absolutely need browser side querying and can't have a server, you should use Firebase. If...
You can use unittest.mock to mock the low-level API you are using to implement the wrapper and then use asserts to check calls to the API from the wrapper. I don't know much about django models or rethinkdb but it could look something like this. import uuid import unittest import...
Try this to connect: r.connect(host="localhost", port=28015).repl() And make sure that the server and the driver have matching versions (at least the first two numbers). rethinkdb --version pip freeze | grep rethinkdb If they don't, update the server/driver....
This way you can create a topic and channels: var hellodd; hellodd = new nsq.Reader('part_one', 'one_channel', { lookupdHTTPAddresses: 'localhost:4060' });...
You can write something like: o = {a: {aa: "aa1", aaa: "aaa1"}, b: "b1"} r.expr(o).do({a: r.row('a')('aa').default(null), b: r.row('b').default(null)}) ...
#1 So, basically you want all documents that haven't expired yet, i.e. where valid_to is in the future? In order to do that, you should definitely use RethinkDB time/date functions. To do that, you have to create a time instance with either r.time or r.ISO8601. An example of this...
You can use the update command along with filter to filter the elements in the array and pass the result along to update. r.table('30848200').get(1).update(function (row) { return { 'things': row('things') .filter(function (item) { return item('name').ne('b') }) } }) Basically, you'll be overwriting things with the filtered array....
You can group by a partial field by passing an anonymous function to your group method. Any time you want some special behavior out of a group function, think about using anonymous functions (lambda functions). In this case, you can use the match method to pass a regular expression that...
If you're using JavaScript, you can just insert a JavaScript object into RethinkDB. Just make sure to convert your API response into a JS object. var obj = { type: 'type', property: 'property' }; r.table("api") .insert(obj) .run(conn, callback) Take a look at the documentation for insert. Keep in mind that...
Assuming your dates are currently stored as strings? You don't say... Here is some sample data... [ {"date":"2014-12-11T20:00:41.000Z","id":"cd6e152a-9df7-49bc-a887-b43ef5cb559d","name":"dude2"}, {"date":"2014-12-11T21:00:41.000Z","id":"5f5f2cef-9853-4400-ad6e-4fa26ff5469b","name":"dude1"}, {"date":"2014-12-11T19:00:41.000Z","id":"651cef31-4560-4bca-b458-ce43aa8c0c90","name":"dude3"} ] With this query... r.db('mydb').table('test').update({ date: r.ISO8601(r.row('date')) }); Data becomes... [...
r.table('users').filter( lambda user: ~user.has_fields('likes_to_eat', 'likes_to_drink') ).run(conn) Edit: Made a mistake and used contains instead of has_fields....
RethinkDB uses a string-encoding of 128 bit UUIDs (basically hashed integers). The string format looks like this: "HHHHHHHH-HHHH-HHHH-HHHH-HHHHHHHHHHHH" where every 'H' is a hexadecimal digit of the 128 bit integer. The characters 0-9 and a-f (lower case) are used. If you want to generate such UUIDs from an existing integer,...
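The answer above is truncated; one way to do the formatting on the client, assuming you already have the 128-bit value as a zero-padded, lower-case, 32-character hex string (the helper name is made up):

```
// format a 32-character hex string as a RethinkDB-style UUID
function toUuidString(hex32) {
  return [
    hex32.slice(0, 8),
    hex32.slice(8, 12),
    hex32.slice(12, 16),
    hex32.slice(16, 20),
    hex32.slice(20)
  ].join('-');
}

toUuidString('0123456789abcdef0123456789abcdef');
// => "01234567-89ab-cdef-0123-456789abcdef"
```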
There are many different way to tackle this problem. Installing RethinkDB in Windows First, you can, in fact, get RethinkDB working on Windows. You just have to use a virtual machine. Here is a tutorial on how to install RethinkDB on Windows. That being said, it's not as simple to...
I think getNearest returns a stream of the format {distance: <number>, doc: <doc>}, so you probably have to replace checkinId in eqJoin with r.row("doc")("checkinId") So something like r.table("places").getNearest(...).eqJoin(r.row("doc")("checkinId"), r.table("checkins")) The reason why nothing is returned is because eqJoin behaves like a SQL inner join, meaning that if no match is...
You'd have this problem even if the server responded immediately, because the user might disconnect after you've sent the query to the server and before the response has made it back. Unfortunately we can't create the cursor before sending the query to the server because in the general case figuring...
If I properly understood your question, the query you are looking for is (in JavaScript) r.table("x").group("y").count().ungroup().orderBy("reduction") In Python/Ruby, it would be r.table("x").group("y").count().ungroup().order_by("reduction") ...
Yes, if you chain a between command with an orderBy one (using the same index), it will be executed in an efficient way.
javascript,database,rethinkdb,deepstream.io
I have to admit that I don't think that RethinkDB supports a concept of nested tables. If you'd like to create a table per chat, just use a character other than the splitChar, e.g. 'chat-idle_banter/<recordName>'
Try: r.table('events').group('userId').map(function(event) { return r.object(event('action'), 1); }).reduce(function(a, b) { return a.merge(b.keys().map(function(key) { return [key, a(key).default(0).add(b(key))];}).coerceTo('object')); }) ...
If you use the init script, the RethinkDB server will be run as user rethinkdb by default. It looks like it doesn't have permission to write to /home/mofax/rethinkdb. Unless you've changed the user in the RethinkDB instance configuration file, I think you just need to run $ chown -R rethinkdb:rethinkdb...
Just chain them together! r.db('items').table('tokens') .filter(r.row('valid_to').gt(r.now())) .filter(r.row('processed').eq(false)) And you can keep chaining stuff after that. ...
Assigning to a local variable like that (result in your case) doesn't work with the way RethinkDB's driver builds up the query object to send to the server. When you write code like above, you're storing a literal string in the local variable once on the client (rather than once...
As mlucy said, you can use filter in conjunction with or and contains to get all documents with a specific tag. If you have an array of tags, you can do the following too: var tags = ["young", "cricket"]; r.table('posts').filter(function (row) { return r.expr(tags).contains(function (value) { return row("tags").contains(value) }); })...
Ended up with the following: .indexCreate('myItems', r.row('mylists')('items').concatMap(function (x) {return x})('name'), {multi:true}); ...
node.js,performance,resources,database-performance,rethinkdb
My guess is that the bottleneck here is the disk system, but not its throughput. What's more likely is that writes are happening in chunks that are too small to be efficient, or that there are delays due to latency between individual writes. It's also possible that the latency between...
First: The bad news: deepstream.io is purely a messaging server - it doesn't look into the data that passes through it. This means that any kind of querying functionality would need to be provided by another system, e.g. a client connected to RethinkDB. Having said that: There's good news: We're...
Because this is JavaScript and the code is asynchronous, your dbList query does not have access to your connection variable. You need to put your dbList code inside the connect callback. module.exports = function(r, config) { var connection = null; r.connect(config.rdb, function(err, conn) { if (err) throw err connection =...
Your question is not 100% clear to me so I'm going to restate the problem to make sure my solution gets sense. Problem Get all documents where the message property is of type object or the message property is a string and matches a particular regular expression (using the match...
There is special handling for non-existing fields (default: false) so I guess it is best to rewrite the query to not call .eq() on a missing field. You could either check id first: .filter(r.row('id').eq('aaa').or(r.row('yyy').eq('aaa'))) or maybe by setting the default behavior directly on the operation with missing fields: .filter(r.row('yyy').eq('aaa').default(false).or(r.row('id').eq('aaa'))) BTW:...
I'll reformulate the question just to make it a bit more clear: Problem Given a specific document with geodata, I want to also return the four nearest locations to that location. Solution First, make sure that you've created the geo index on the table you want to query: r.table('dealer_locations').indexCreate('location', { geo:...
OK, I found the problem. On the external server I have to run RethinkDB with the param: --canonical-address DEDICATE_IP_SERVER:29015 After that I can connect from home with the join param....
In JavaScript, (...) is the field selector. r.db("main").table("countries").limit(1)('Country')('WebName') ...
There are two ways to go: Open and close one connection per express request Use a connection pool - rethinkdbdash is pretty good at that in this case. It has a connection pool and automatically takes care of connections (you never see a connection with rethinkdbdash). ...
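A minimal sketch of the pooled option with rethinkdbdash, assuming an Express app (app) and a local RethinkDB instance:

```
var r = require('rethinkdbdash')({host: 'localhost', port: 28015});

app.get('/users', function(req, res) {
  r.table('users').run()                       // no connection argument: the pool handles it
    .then(function(users) { res.json(users); })
    .catch(function(err) { res.status(500).send(err.message); });
});
```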
This would be easier if lanes were an array rather than an object, but with the current document structure a query like this should do it: r.db('libstats').table('flowcells').merge(function(flowcell) { return { 'lanes': flowcell('lanes').keys().map(function(n) { return [n, flowcell('lanes')(n).merge(function(lane) { return {'library_name': r.db('libraries').table('libraries').get(lane('library_id'))}; })]; }).coerceTo('object') }; })
You can't use .getIntersecting with .changes, but you can write essentially the same query by adding a filter after .changes that checks if the loc is within the circle. While .changes limits what you can write before the .changes, you can write basically any query after the .changes and it will...
Sorry for the delayed reply; I don't check StackOverflow as often as I should. GoRethink actually does offer support for changefeeds. Unfortunately the documentation is currently a bit lacking and I hope to work on it soon; until then I recommend having a look at the tests. Hopefully that should give you...
node.js,express,jade,rethinkdb
I personally use sailsjs for node development, so I'm not knowledgeable on express specifically... but you need the rethinkdb driver for starters: var r = require('rethinkdb'); Then perhaps you could change your home page route to something like this... app.get('/', function(req, res) { r.connect({host: 'rethinkdb_server_ip', db: 'blogdb'}, function(err, conn){ r.table('entries').run(conn, function(err,...
python,multithreading,concurrency,nosql,rethinkdb
RethinkDB definitely works when accessed concurrently from multiple threads or clients. The Python driver should work fine on multiple threads as long as you open a separate connection for each thread.
Currently there's no way to do this; if someone has access to your cluster, they have access to all the databases in it.
Currently, you have no index in your query, which means the database has to go through every single document filtering out by the artist. You can create an index for the nested artist name property using the indexCreate command: r .db("discogs") .table("releases") .indexCreate('artistName', r.row('release')('artists')('artist')('name')); After that, you can just get...
The problem is that this test is asynchronous, and you're treating it as a synchronous test. You need to do the following: module.exports = { 'setup()': function(beforeExit, assert) { var success; db.setup().then(function(){ success = true; }).catch(function(err){ success = false; assert.isNotNull(err, "error"); }); beforeExit(function() { assert.isNotNull(undefined, 'Ensure it has waited for...
You can use the Result instance for WriteResponse following the example in the docs for insert res <- runDB $ table "articles" # insert doc if writeResponseErrors res == 0 The R.! operator is meant for constructing queries of type ReQL, not for examining results....
I would recommend making both fields always be arrays, even if the arrays sometimes only have a single value. If you do that, you can do this with concat_map: row('a').concatMap(function(a){ return row('b').map(function(b){ return a.add('-').add(b); }); }); If you want to continue using a mix of single values and arrays, you...
Currently there's no optarg you can put there to get that, but in the 2.2 release you'll be able to use the include_initial optarg for that: https://github.com/rethinkdb/rethinkdb/issues/3579 .
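Once you're on 2.2 or later, it looks roughly like this (includeInitial is the JavaScript spelling of that optarg; the table and key are made up):

```
r.table('users').get(1).changes({includeInitial: true}).run(conn, function(err, feed) {
  if (err) throw err;
  feed.each(function(err, change) {
    if (err) throw err;
    // the very first change delivered is the current value of the document
    console.log(change);
  });
});
```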
I think the main problem here is that you don't need a multi index. Understanding Multi Indexes You said "multi because the same 2-pair might correspond to multiple documents", but with any secondary index, the index presumes that the value of that property (in this case, the pair of values) corresponds to multiple...
Here's a way to do it: // Group by user key r.table('30400911').group(r.row('user')('key')) // Only get the product info inside the reduction .map(r.row('product')) .ungroup() .map(function (row) { return { user: row('group'), // Group by name products: row('reduction').group('name').ungroup().map(function (row) { return { name: row('group'), // Convert array of tags into key value...
RethinkDB doesn't just run your function on the server, it calls your function once to build up a predicate. (See http://rethinkdb.com/blog/lambda-functions/ .) You can build up the predicate you want and return it like so: r.db('test').table('monstertrucks').filter(function(item) { var pred = r.expr(true); arr = [{attr: "speed", val: 5}, {attr: "power", val:...
I don't quite understand what you're going for, but does this do what you want? If not, what do you want to be different in the output? r.table("contacts") .filter({"Type": "Agent","ContactDescription" : "CONDO"}) .hasFields("CorporationName") .group("CorporationName") .ungroup() .merge(function(row){ return {count: row('reduction').count()}; }) .orderBy(r.desc('count')) ...
If you have a list of objects you want to replace, something like this should work: r.expr(myArrayOfDocuments) .forEach(function(row) { return r.table('my_table').get(row('id')) .replace(row); }) .run(conn, callback); This assumes your primary key is id, but if you want a more generic solution, you can replace id with r.table('my_table').info()('primary_key'). The reason the query...
node.js,stream,rethinkdb,piping
You might want to check out the third-party driver RethinkDB Dash which has writeable streams. The official driver doesn't implement the stream interface currently, but we may be doing it in the near future
This is how you retrieve all the documents without the field single or with the field single being null. r.table('data').filter(function(doc) { return doc.hasFields('single').not(); }).run().then(...).error(...) If you just want undefined fields: r.table('data').filter(function(doc) { return doc('single').ne(null); }).run().then(...).error(...) This works because if single is not a key, an error will be returned in...
Short answer: There's no limit. The 100,000-element figure for the buffer applies if you do not retrieve changes from the cursor: the server will keep buffering them up to 100,000 elements. If you use each, you will retrieve the changes as soon as they are available, so you will not...
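A minimal sketch of draining the feed with each so changes never pile up in the server-side buffer (table name assumed):

```
r.table('data').changes().run(conn, function(err, cursor) {
  if (err) throw err;
  cursor.each(function(err, change) {
    if (err) throw err;
    console.log(change);   // handle each change as soon as it is available
  });
});
```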
mongodb,redis,couchdb,rethinkdb,foundationdb
With the default RethinkDB configuration, you won't lose any writes you've received responses for even if the server is restarted.
javascript,database,time,rethinkdb
Ordering the DB in descending order according to the timestamp solved my problem. r.db('deepstream').table('chat').orderBy({index: r.desc('ds_id')}) .filter(function (message) { return message("_d")("timestamp").lt(<timestamp>); }).limit(X); ...
python,python-2.7,rethinkdb,rethinkdb-python
Usually when printing to stdout, the output will be buffered and then flushed when it grows large enough. Try adding sys.stdout.flush() after the print statement, and the output should be smoother.
If you have the Python driver you can do the following: echo -e 'import rethinkdb as r; \nr.connect("localhost", 28015).repl() \nr.db_create("NAME_OF_YOUR_DATABASE").run()' | python To install the Python driver, you can just use pip: pip install rethinkdb This might not be the best way of doing this, but it might be the...
It looks like you are missing the m4 package. On Ubuntu, you can install it by doing sudo apt-get install m4 ...
It seems like your second query (the insert) does not have access to the conn variable. For this to work, you'd need to put the event reader code inside the callback for your connect function. var nsq = require('nsqjs'); var r = require('rethinkdb'); var nsqdd = (process.env.NSQD_RETH || "localhost:4161").split(","); var...
You can batch your insert by passing an array of documents to insert: insert([doc1, doc2, doc3, doc4]) You can also use multiple connections, and have at most one query running per connection -- you may be interested in the rethinkdbdash package if you don't want to manually do that (it...
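A hedged sketch of the batching idea over a single connection (the batch size, table name, and docs array are arbitrary):

```
var batchSize = 200;
var batches = [];
for (var i = 0; i < docs.length; i += batchSize) {
  batches.push(docs.slice(i, i + batchSize));
}

// insert one batch at a time so each query carries many documents
batches.reduce(function(previous, batch) {
  return previous.then(function() {
    return r.table('my_table').insert(batch).run(conn);
  });
}, Promise.resolve());
```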
That should work, but it's probably not super efficient r.table('x').filter(function(doc) { return doc('description').coerceTo("STRING").match("commonwealth") }) ...
Naive Way You can actually do that in just one query by doing the following: r.table('forums') .get(ID) .merge({ 'comments': r.table('posts').filter({ 'forumID': ID })('id').coerceTo('array') .do(function (postsIdsArray) { return r.table('comments').filter(function (row) { return postsIdsArray.contains(row('id')); }) }).coerceTo('array') })('comments') Better Way If you're executing this operation a lot and you have the relationship on...