You must specify your columns, because your import file doesn't contain your Job_ID: LOAD DATA LOCAL INFILE '/tmp/jobs.csv' INTO TABLE avjobs FIELDS TERMINATED BY '|' LINES TERMINATED BY '\n' (Job_Name, Job_Seq, Job_Date, Start_Time, End_Time, Runtime, Status); See the manual: LOAD DATA INFILE 'persondata.txt' INTO TABLE persondata; By default, when no column...
c#,excel,insert,bulkinsert,azure-documentdb
Update (4/8/15): DocumentDB just released a data import tool, which supports JSON files, MongoDB, SQL Server, and CSV files. You can find it here: http://www.microsoft.com/en-us/download/details.aspx?id=46436 Original Answer: DocumentDB doesn't support bulk importing from Excel files just yet... However, you can leverage DocumentDB's stored procedures to make bulk import a bit...
Maybe this can help: change the last-but-one line to bulk_through.append(ThroughModel(contactgroup_id=gr.pk, contacts_id=item.pk)) It seems to me that the variables are mixed up....
javascript,mongodb,bulkinsert,node-mongodb-native
Based on your comment, your find query is not using an index, which forces a full collection scan. Add an index to your collection that can be used by find(query), and use explain() to confirm it is being used....
INSERT INTO userexternalid (userid, facebookid, externalid) SELECT id AS userid, facebookid, externalid FROM user; ...
database,insert,bulkinsert,orient-db
Well, it seems like the problem wasn't with Orient itself, but with the GlassFish server where the program was running. GlassFish terminated the connection with Orient for some reason. Anyway, running the program outside the server solved the problem. Thanks!
sql-server,bulkinsert,sqlbulkcopy
Insert your first table with the new keys (leave the pk blank on insert) and make a (temporary) column in the DB1 table for the old key. Look up (join) your second insert on the old key column to get your new fk. When you're done, delete the old key column...
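A minimal SQL sketch of that pattern, with hypothetical table and column names (src_parent/src_child stand in for the source tables, parent/child for the destination):

-- Stage 1: copy the parent rows, keeping the old key in a temporary column
ALTER TABLE parent ADD old_id INT NULL;
INSERT INTO parent (name, old_id)
SELECT name, id            -- new pk is generated, old pk is preserved in old_id
FROM src_parent;

-- Stage 2: insert the child rows, joining on the old key to find the new fk
INSERT INTO child (parent_id, value)
SELECT p.id, c.value
FROM src_child AS c
JOIN parent AS p ON p.old_id = c.parent_id;

-- Stage 3: drop the temporary column once everything has been migrated
ALTER TABLE parent DROP COLUMN old_id;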
python,mysql,multiprocessing,bulkinsert,peewee
You can rely on the database to enforce unique constraints by adding unique=True to fields or multi-column unique indexes. You can also check the docs on get/create and bulk inserts: http://docs.peewee-orm.com/en/latest/peewee/models.html#indexes-and-unique-constraints http://docs.peewee-orm.com/en/latest/peewee/querying.html#get-or-create http://docs.peewee-orm.com/en/latest/peewee/querying.html#bulk-inserts ...
if(PQputCopyData(conn,buffer,400) == 1) What's wrong is passing 400 instead of the actual size of the contents in buffer, making it send unallocated garbage after the real data. Use strlen(buffer) instead. Also you want each line to finish with a newline, so buffer should be : const char *buffer =...
mysql,left-join,bulkinsert,right-join
OPTION 1 INSERT INTO employees (forename, surname, employersId, custom_corpore_headOffice, contractId) SELECT firstname, surname, employeenumber, dob, store, contractId FROM batch_import WHERE NOT EXISTS (Select 1 From employees where batch_import.employeenumber = employees.employersId AND batch_import.contractId = employees.contractId) OPTION 2 INSERT INTO employees (forename, surname, employersId, custom_corpore_headOffice, contractId) SELECT firstname, surname, employeenumber, dob, store,...
sql-server,performance,bulkinsert
400 M rows per 3 hours means roughly 37,000 inserts per second, which already sounds good for one application running on an 8-core machine. Have you tried running more instances of this app from different machines? Does it help? If yes, then just scale it; if no, then I would check...
If you have thousands of users, it doesn't matter much to generate thousands of SQL queries and send them to your MySQL server. But things will change if you have millions of users. One affordable choice is to write a notification to a table, for example system_notifications. Display not only notifications but...
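A minimal sketch of that idea, with hypothetical table and column names: store one row per notification instead of one row per user, and keep per-user state separately.

-- One row per notification, regardless of how many users should see it
CREATE TABLE system_notifications (
    id         INT AUTO_INCREMENT PRIMARY KEY,
    message    VARCHAR(255) NOT NULL,
    created_at TIMESTAMP NOT NULL DEFAULT CURRENT_TIMESTAMP
);

-- Per-user read state, written only when a user actually dismisses a notification
CREATE TABLE user_notification_reads (
    user_id         INT NOT NULL,
    notification_id INT NOT NULL,
    PRIMARY KEY (user_id, notification_id)
);

-- Publishing a notification is then a single insert, not millions of them
INSERT INTO system_notifications (message) VALUES ('Maintenance tonight at 02:00');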
java,android,sqlite,bulkinsert
You don't execute the text file; you read it and insert the rows into the database. You can store it as CSV or JSON, so you just need to look up (if you don't already know): how to package a text file with your APK, and how to open such a...
Please take a look at this document. At every iteration, Doctrine creates one insert ... query for you; then, when you call flush(), Doctrine sends all those insert queries to the database at one time inside a loop (using a mechanism like foreach(queries as query) { run->query.. }). In your case...
c#,.net,mongodb,bulkinsert,mongodb-csharp
Yes. You can insert in bulk: Bulk Inserts in MongoDB IEnumerable<WriteConcernResult> results = collection.InsertBatch(records); That will cut out most of the round trips to the DB, which should speed things up....
It looks like you are concatenating the SQL string incorrectly; try something like: SET @sqlCommand = 'BULK INSERT '+@tableName+' FROM '''+@filePath + @tableName+''' If you use special characters within your table name you can also try: SET @sqlCommand = 'BULK INSERT ['+@tableName+'] FROM '''+@filePath + @tableName+''' You can verify you...
So, according to the documentation: if you want to skip any column except the last, you need to create a view that exposes the subset of columns you want to import into. The view should look like create view v1 as select id, n1, n2 from t1 and then you...
The above usage of the bulk API is wrong. bulk takes as input a hashref where the body is a reference to an array of actions and documents. For example, something along these lines should work: $action = {index => {_index => 'my_index', _type => 'blog_post', _id => $ifileid}}; $doc = {filename...
Sounds like you have found a bug! I will look into it. https://sqlcebulkcopy.codeplex.com/workitem/25951 Fix is: ALTER TABLE [MYtable] ALTER COLUMN [Id] IDENTITY (1001,1); ...
Part of the "exact" difference is in the naming of the methods: one is "Ordered" and the other is "Unordered". But there is a little more to it than just that. Ordered: of course, executes the statements in the batch in the same order they were created in....
php,zend-framework2,bulkinsert,insert-into
There is no method for bulk insert in ZF (Zend\Db), so you have to write a method for that. Zend\Db\Sql\Sql multiple batch inserts Bulk insert is supported in MySQL but not in all databases... Check the discussion at the link. You can execute a raw SQL query for this case: $parameters =...
Probably you are looking for something like this (google "Oracle table functions"): SQL> create table my_tab(pin VARCHAR2(7)) 2 / SQL> declare 2 arr INJURED_PEOPLE := INJURED_PEOPLE(); 3 begin 4 arr.extend(2); 5 arr(1) := INJURED_PERSON('APIN',null,null,null,null,null,null); 6 arr(2) := INJURED_PERSON('BPIN',null,null,null,null,null,null); 7 INSERT INTO my_tab SELECT x.PIN FROM table(arr) x; 8 end; 9 / SQL>...
c#,entity-framework,nested,bulkinsert,relationships
I had a bad experience with huge context saves. All those recommendations about saving in iterations of 100 rows or 1000 rows, then disposing the context or clearing the list and detaching objects, assigning null to everything, etc. etc. - it is all bullshit. We had a requirement to insert millions of rows daily...
sql,sql-server,tsql,bulkinsert
@tommy_o was right about using TABLOCK in order to get my information loaded. Not only did it run in about an hour and a half instead of nine hours, but it barely increased my log size. For the second part, I realized I could free up quite a bit of...
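For reference, a minimal sketch of a BULK INSERT using the TABLOCK hint; the table name and file path are hypothetical:

-- TABLOCK takes a bulk-update table lock, which allows minimally logged inserts
-- when the database uses the SIMPLE or BULK_LOGGED recovery model.
BULK INSERT dbo.MyStagingTable
FROM 'C:\data\myfile.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n',
    TABLOCK
);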
c#,database,excel,bulkinsert,azure-documentdb
Update: As of 4/8/15, DocumentDB has released a data import tool, which supports JSON files, MongoDB, SQL Server, and CSV files. You can find it here: http://www.microsoft.com/en-us/download/details.aspx?id=46436 In this case, you can save your Excel file as a CSV and then bulk-import records using the data import tool. Original Answer:...
javascript,node.js,rest,couchdb,bulkinsert
You can use the bulk document API to insert (and even update) multiple documents.
mysql,sql,bulkinsert,load-data-infile
I don't know what version of MySQL you are using, but a quick Google search found possible answers to both your questions. Below are excerpts from the MySQL 5.1 Reference Manual: The file name must be given as a literal string. On Windows, specify backslashes in path names as forward...
mysql,sql,bulkinsert,load-data-infile
Follow the steps below. Step 1: Truncate the table after disabling the foreign key constraint, then enable it again: set foreign_key_checks=0; truncate table mytable; set foreign_key_checks=1; Step 2: Now, at the time of bulk uploading, select only those columns in the table that are in your CSV file, i.e. un-check the rest (the auto id too) and make...
sql,sql-server,insert,bulkinsert
You can just add the literal values in your INSERT ... SELECT: INSERT INTO VoterElection (voterID, elecID, voterType, votingStatus) SELECT userID, 1, 'Normal', 'Active' FROM Voter WHERE lgDiv=3; You can change the values 1, 'Normal', 'Active' to whatever you want....
python,sql-server-2008,authentication,ssms,bulkinsert
Could the MS SQL Server 2008 machine possibly be in a different security group (or have different settings) than the shared drive where the file is located? Because the bulk insert operation is run on the server side, not by Management Studio on the client, it might not have access to the file, the 'access...
sql-server,file,csv,format,bulkinsert
Here are a few things which I noticed are wrong in your code above. Id is an INT type, however the data in the CSV is A1, A2... for this column. In your format file, the database column ordering starts from 2 and not from 1. You don't need a format file; you can...
mysql,cursor,bulkinsert,last-insert-id
https://dev.mysql.com/doc/refman/5.0/en/getting-unique-id.html LAST_INSERT_ID() will return the first id from a multi-row insert. So you can just return that value and: INSERT INTO `table1` (`name`, `value`) VALUES('name1','value1'),('name2','value2'); SET @firstid := LAST_INSERT_ID(); SELECT * from table1 where id >= @firstid; ...
The solution ended up being to build a COM object in C# that does the bulk insert and then leveraging that COM object in the VB6 project.
sql,sql-server,csv,bulkinsert,sql-server-2014
You could set the MAXERRORS property to quite a high value, which will allow the valid records to be inserted and the duplicates to be ignored. Unfortunately, this will mean that any other errors in the dataset won't cause the load to fail. Alternatively, you could set the BATCHSIZE property, which...
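A rough sketch of both options against a hypothetical table and file; the option values are only illustrative:

-- Option 1: tolerate a number of bad rows before the load is aborted
BULK INSERT dbo.TargetTable
FROM 'C:\data\import.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n',
    MAXERRORS       = 1000
);

-- Option 2: commit in smaller batches, so a failure only rolls back one batch
BULK INSERT dbo.TargetTable
FROM 'C:\data\import.csv'
WITH (
    FIELDTERMINATOR = ',',
    ROWTERMINATOR   = '\n',
    BATCHSIZE       = 10000
);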
performance,mongodb,indexing,bulkinsert
When adding new items to a collection, the database will have to keep the index up-to-date. Since the index in MongoDB is a B-Tree by default, that means it will have to insert an item in the tree. While that isn't a particularly expensive operation in the best case, it...
You can use a SEQUENCE as a unique ID generator or try a TRIGGER ON INSERT to get a unique ID. EDIT: With MySQL you can build a trigger that runs for every row: DELIMITER $$ CREATE TRIGGER adresse_trigger_insert_check BEFORE INSERT ON adresse FOR EACH ROW BEGIN IF NEW.land IS NULL THEN SET NEW.land :=...
c++,bulkinsert,z-order,quadtree
This is untested code; you should double-check that it works. Also, this code is almost certainly not portable, and might even be undefined behavior. It's certainly implementation-defined behavior at the very least, but it's probably unspecified... I'd have to read more carefully the rules regarding reinterpret_cast to and...
If you're using MongoDB 2.6+ you can do an unordered bulk operation. If the error occurs when doing write operations, MongoDB will continue to process the remaining operations: DBCollection coll = db.getCollection("test"); BulkWriteOperation bulk = coll.initializeUnorderedBulkOperation(); bulk.insert(new BasicDBObject("foo", 1)); bulk.insert(new BasicDBObject("bar", 2)); bulk.execute(); The downside to this approach is that...
csv,sql-server-2008-r2,bulkinsert
Thank you Ross Presser, your answer did lead me in the right direction. Here is what I did in order to get the correct result: DECLARE @SQL nvarchar(max), @FileName nvarchar(200) CREATE TABLE #Temp_FileName ( [FileName] nvarchar(200) ) INSERT INTO #Temp_FileName EXECUTE XP_CMDSHELL 'dir \\FAS-RBGFS01\costec\HRSS\ /b' DELETE FROM #Temp_FileName WHERE [FileName]...
There is absolutely a way to do checks in SqlBulkCopy - by not doing them. Do not insert into the final table (which is better anyway; SqlBulkCopy has some seriously bad behavior around its locking) but into a temporary table (that you can create dynamically). Then you can execute custom SQL...
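A minimal sketch of that follow-up step, assuming SqlBulkCopy has already loaded a hypothetical staging table #Staging and the final table carries the real constraints:

-- Run the checks in set-based SQL and move only the rows that pass.
INSERT INTO dbo.FinalTable (Id, Name, Amount)
SELECT s.Id, s.Name, s.Amount
FROM #Staging AS s
WHERE s.Amount >= 0                                   -- example validation rule
  AND NOT EXISTS (SELECT 1 FROM dbo.FinalTable AS f   -- skip duplicates
                  WHERE f.Id = s.Id);

DROP TABLE #Staging;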
sql,many-to-many,bulkinsert,sequelpro
I have not worked with Sequel Pro, so I don't know if I have the proper syntax, but perhaps I can give you the right logic. In MySQL, you can use a SELECT statement to insert new rows into a table. In your case, you want to insert new rows...
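A rough illustration of that logic with hypothetical table names, bulk-inserting rows into a many-to-many join table from a SELECT:

-- Link every author named 'Smith' to every book published in 2014,
-- creating all the join-table rows in a single INSERT ... SELECT.
INSERT INTO author_book (author_id, book_id)
SELECT a.id, b.id
FROM authors AS a
CROSS JOIN books AS b
WHERE a.last_name = 'Smith'
  AND b.published_year = 2014;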
multithreading,cassandra,bulkinsert,apache-spark
As you have observed, a single writer can only be used in serial (ConcurrentModificationExceptions will happen if you do not), and creating multiple writers in the JVM fails due to static schema construction within the Cassandra code that the SSTableWriter uses. I'm not aware of any workaround other than to...
ruby-on-rails,variables,activerecord,bulkinsert,bulk
You can do this in your controller. First do what Steve Klein says, then you can do the following. For example, if you want to use a text field to enter the new user names, you can do something like this in your form: <%= f.label :user_names %> <%= f.text_field :user_names_string,...
You can do it using LAST_INSERT_ID() to get the last auto-increment id from the companies table and insert it into the other table, something like: INSERT INTO companies (company_name) VALUES ('test'); SET @last_id_companies = LAST_INSERT_ID(); INSERT INTO contact_persons (contact_name, company_id) VALUES ('test', @last_id_companies); ...
sql,sql-server-2008-r2,bulkinsert
The issue you are having is actually not due to the Row Terminator. I suspect, along with the End of File error, you also saw something similar to the following: Msg 4864, Level 16, State 1, Line 1 Bulk load data conversion error (type mismatch or invalid character for the...
sql-server,stored-procedures,bulkinsert,identity-insert
Alright, this is what we figured out: The reason I'm getting the error about the ID isn't because of the insert procedure, but because of the table type. In ErrorTableType I included the Id along with the other columns. Removing Id from the table type (but keeping it in the...
You create a second table, duplicating the original structure, then dump all the records into it. Rename the tables and you're done. It'd be worth rebuilding any indexes on the new table, and you might hit a few snags with MySQL getting fussy about re-using old names. INSERT INTO <copy...
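A minimal sketch of that table-swap approach, assuming a hypothetical table named orders and a hypothetical source of the new records:

-- 1. Create an empty copy with the same structure (indexes included)
CREATE TABLE orders_new LIKE orders;

-- 2. Bulk-load the records into the copy
INSERT INTO orders_new SELECT * FROM orders_import_source;

-- 3. Swap the tables in a single atomic rename
RENAME TABLE orders TO orders_old, orders_new TO orders;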
c#,oracle,memory-leaks,bulkinsert,sqlbulkcopy
Found the root cause: the exe is running in 32-bit mode and has a roughly 1.5 GB memory limit. We need to change the target platform and replace Oracle.DataAccess.dll with the 64-bit version. There is also an alternative solution: load the data in batches so it will not exceed the 1.5 GB memory limit....
c#,oracle,oracle11g,bulkinsert,odp.net
For someone with the same problem: I ended up creating a stored procedure that does nothing but execute the same SQL I was trying to execute directly. You can then bulk call the procedure and it works fine. So far, this is the only way to do it until the...
c#,oracle,csv,oracle10g,bulkinsert
You can try an Oracle transaction commit. Maybe this helps you. using (OracleConnection connectiontodb = new OracleConnection(databaseconnectionstring)) { connectiontodb.Open(); using (OracleBulkCopy copytothetable = new OracleBulkCopy(connectiontodb)) { OracleTransaction tran = connectiontodb.BeginTransaction(IsolationLevel.ReadCommitted); try { copytothetable.ColumnMappings.Add("TBL_COL1", "TBL_COL1"); copytothetable.ColumnMappings.Add("TBL_COL2", "TBL_COL2"); copytothetable.ColumnMappings.Add("TBL_COL3", "TBL_COL3");...