c#,.net,vb.net,producer-consumer,blockingcollection
Design your Producer and Consumer classes to accept the BlockingCollection as a constructor parameter. Then, wherever you instantiate these classes (perhaps even more than one of each), just make sure to pass in the same instance of the BlockingCollection to all producers and consumers. Once you've done that,...
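For illustration only (the question is about .NET's BlockingCollection), the same constructor-injection wiring looks like this with Java's BlockingQueue; the class and queue names here are invented:

// Hypothetical sketch: both sides receive the same queue via their constructors.
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

class Producer implements Runnable {
    private final BlockingQueue<String> queue;
    Producer(BlockingQueue<String> queue) { this.queue = queue; }
    public void run() {
        try {
            queue.put("work item");           // blocks when the queue is full
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

class Consumer implements Runnable {
    private final BlockingQueue<String> queue;
    Consumer(BlockingQueue<String> queue) { this.queue = queue; }
    public void run() {
        try {
            System.out.println(queue.take()); // blocks until an item is available
        } catch (InterruptedException e) {
            Thread.currentThread().interrupt();
        }
    }
}

class Wiring {
    public static void main(String[] args) {
        BlockingQueue<String> shared = new ArrayBlockingQueue<>(100);
        new Thread(new Producer(shared)).start();  // every producer...
        new Thread(new Consumer(shared)).start();  // ...and every consumer gets the same instance
    }
}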
java,multithreading,producer-consumer,blockingqueue
I think your solution is simple and elegant, and I think you should keep it with some modifications. The modifications I propose are synchronization. Without it, thread interference and memory consistency errors can (and very often do) occur. On top of that, you can't wait or notify on a lock you...
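For reference, a bare-bones sketch of the kind of synchronization being proposed, with wait()/notifyAll() performed on the same lock object that guards the shared data (the names here are illustrative, not taken from the question):

import java.util.LinkedList;
import java.util.Queue;

// Minimal sketch: wait()/notify() must be called while holding the monitor of
// the same object used for synchronization (here, an explicit lock object).
class Shared {
    private final Object lock = new Object();
    private final Queue<Integer> queue = new LinkedList<>();

    void produce(int value) {
        synchronized (lock) {            // must own the monitor before notify()
            queue.add(value);
            lock.notifyAll();
        }
    }

    int consume() throws InterruptedException {
        synchronized (lock) {            // must own the monitor before wait()
            while (queue.isEmpty()) {
                lock.wait();             // atomically releases the lock and waits
            }
            return queue.remove();
        }
    }
}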
message-queue,producer-consumer,apache-kafka,kafka-consumer-api
Depends on what you mean by "blocked". If you mean "are existing connections closed when rebalance is triggered", then the answer is yes. Kafka's current rebalancing algorithm is unfortunately imperfect. Here is what happens during a consumer rebalance. Assume we have a topic with 10 partitions (0-9), and one...
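As a side note, if you want to observe these rebalances from the consumer side, the Java client lets you register a ConsumerRebalanceListener when subscribing; a rough sketch with placeholder broker, group, and topic values:

import java.util.Collection;
import java.util.Collections;
import java.util.Properties;
import org.apache.kafka.clients.consumer.ConsumerRebalanceListener;
import org.apache.kafka.clients.consumer.KafkaConsumer;
import org.apache.kafka.common.TopicPartition;

public class RebalanceWatcher {
    public static void main(String[] args) {
        Properties props = new Properties();
        props.put("bootstrap.servers", "localhost:9092");     // placeholder broker
        props.put("group.id", "example-group");                // placeholder group
        props.put("key.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");
        props.put("value.deserializer",
                  "org.apache.kafka.common.serialization.StringDeserializer");

        KafkaConsumer<String, String> consumer = new KafkaConsumer<>(props);
        consumer.subscribe(Collections.singletonList("example-topic"),
            new ConsumerRebalanceListener() {
                public void onPartitionsRevoked(Collection<TopicPartition> partitions) {
                    // called before a rebalance takes partitions away
                    System.out.println("Revoked: " + partitions);
                }
                public void onPartitionsAssigned(Collection<TopicPartition> partitions) {
                    // called after the rebalance hands out the new assignment
                    System.out.println("Assigned: " + partitions);
                }
            });
        // the poll() loop would go here; the listener callbacks fire inside poll()
    }
}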
c#,multithreading,concurrency,producer-consumer,blockingcollection
In case you end up using my comment: "Put a dummy item in the collection that wakes up the producer" ...
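The "dummy item" trick in that comment is essentially a sentinel value; here is a hedged Java sketch of the same idea (the sentinel object and queue are invented for illustration, the original is about .NET's BlockingCollection):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// A thread blocked on take() can be woken by enqueuing a recognizable dummy item.
class SentinelExample {
    static final Object WAKE_UP = new Object();   // the dummy/sentinel item

    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<Object> queue = new LinkedBlockingQueue<>();

        Thread worker = new Thread(() -> {
            try {
                while (true) {
                    Object item = queue.take();   // blocks until something arrives
                    if (item == WAKE_UP) {
                        // not real work: just a nudge to re-check flags, timers, etc.
                        System.out.println("woken by dummy item");
                        break;
                    }
                    System.out.println("processing " + item);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        worker.start();

        queue.put("real work");
        queue.put(WAKE_UP);   // unblocks the worker even when no real work is pending
        worker.join();
    }
}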
c#,multithreading,c#-4.0,concurrency,producer-consumer
I think I follow what you're asking; why not create a ConcurrentBag and add to it while processing, like this: while (!_cancelTasks) { //BlockingCollections with Parallel ForEach var bc = _concurrentCollection; var q = new ConcurrentBag<ScannedPage>(); Parallel.ForEach(bc.GetConsumingEnumerable(), item => { ScannedPage currentPage = item; q.Add(item); // process a batch of...
If your struct is global, it will get updated before the Z threads are created. I.e., the variable remp->rop will be incremented Z times before the first thread is even executing.
rabbitmq,amqp,producer-consumer
Publisher and consumer normally don't interact; that is by AMQP protocol design. For example, a specific message may be consumed a long time after it was published, and there is no sense in keeping the producer up and running for that long. Another example is when the publisher sends to the broker one...
python,multithreading,producer-consumer
I added some notes on your code, which has some problems: def run(self): while True: customer=Customer() self.mutex.acquire() if self.queue.full(): print "Queue is full : Cant Enter Critical Section" # NOTE 1: you have to release mutex here: self.mutex.release() elif not self.queue.full(): # NOTE 2: other threads can enter after releasing...
There is nothing wrong with making this conditional: var stats chan []string // Don't initialize stats. func main() { options() go produce(readCSV(loc)) go process() if flag { stats = make(chan []string, 1024) go statistics() // only on flag } <-done } func produce(entries [][]string) { regex, err := regexp.Compile(reg) if...
algorithm,operating-system,producer-consumer
OK, I got it; the problem is in the semantics of the variables. The next_produced variable will fill a whole buffer directly, so buffer[] is an array of BUFFER_SIZE buffers; each iteration of the producer loop produces a next_produced item which directly fills an entire buffer (element) int...
c++,multithreading,pthreads,posix,producer-consumer
The author mentioned at the end of his article, under Semantics of wait() and notify(), that notify(cv) wakes up all threads that are currently waiting on that cv. So your understanding of wait is incorrect; notify there corresponds to pthread_cond_broadcast in POSIX. Furthermore, the documentation of pthread_cond_signal stipulates: The...
c,multithreading,mutex,shared-memory,producer-consumer
So the short answer is: you can use the pthread mutex mechanism as long as pthreads knows about your particular system. Otherwise you'll need to look at the specific hardware/operating system for help. The long answer is going to be somewhat general, because the question does not provide a lot of...
The clock has ~15 ms resolution, so a 1 ms sleep is rounded up to ~15 ms. That's why 1000 items take ~15 seconds. (Actually, I'm surprised; on average each wait should take about 7.5 ms. Anyway.) Simulating work with sleep is a common mistake....
java,multithreading,producer-consumer
Instead of MyObject o = queue.poll(); try MyObject o = queue.take(); The latter will block until there is something available in the queue, whereas the former will always return immediately, whether or not something is available....
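To make the difference concrete, a small sketch (the queue, payload, and timing below are illustrative, not from the original code):

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

class PollVsTake {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>();

        String polled = queue.poll();   // returns immediately: null, queue is empty
        System.out.println("poll() returned: " + polled);

        new Thread(() -> {
            try {
                Thread.sleep(500);
                queue.put("hello");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }).start();

        String taken = queue.take();    // blocks ~500 ms until the item arrives
        System.out.println("take() returned: " + taken);
    }
}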
java,java-8,producer-consumer,completable-future
There is no point in dissolving two directly dependent steps into two asynchronous steps. They are still dependent and if the separation has any effect, it won’t be a positive one. You can simply use List<String> hosts = fetchTargetHosts(); FileDownloader fileDownloader = new FileDownloader(); for(String host: hosts) CompletableFuture.runAsync(()-> new HostDownloader(host).get().forEach(fileDownloader));...
java,multithreading,synchronization,producer-consumer
I solved the problem by putting the System.out.println(...) statements in the get() and set() methods of the GetSetItem class and removing System.out.println(...) from the corresponding producer and consumer classes. For example, the set() method of GetSetItem.java: public synchronized void set(int item, int number) { while(available) { try { wait(); } catch (InterruptedException ie) { System.err.println("Interrupted:...
If you don't need highly scalable and configurable distributed components with guaranteed delivery across multiple server crashes, automatic enrollment/commit/rollback in various transactions, and dead-letter-queue management, then yes, a simple Queue with a custom MessageProducer and MessageConsumer is probably the right approach. The golden rule: if the complexity...
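For the simple end of that spectrum, something along these lines is usually enough; the producer and consumer below are hypothetical stand-ins built on a plain BlockingQueue, not the JMS interfaces of the same name:

import java.util.concurrent.BlockingQueue;
import java.util.concurrent.LinkedBlockingQueue;

// In-process "messaging" with nothing but a bounded queue and two plain threads.
class SimpleMessaging {
    public static void main(String[] args) {
        BlockingQueue<String> queue = new LinkedBlockingQueue<>(1000);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    queue.put("message-" + i);   // blocks if the queue is full
                }
                queue.put("DONE");               // simple end-of-stream marker
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                String msg;
                while (!(msg = queue.take()).equals("DONE")) {
                    System.out.println("consumed " + msg);
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}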
c++,c++11,boost,boost-thread,producer-consumer
Yes, this will work. In particular: "Basically I am not sure if the future associated with the packaged_task can be considered ready when the readers are notified by the function registered with boost::this_thread::at_thread_exit" They can. The packaged_task is the thread function. By the time the thread_proxy implementation from Boost Thread...
c#,oracle,exception,task,producer-consumer
You should never manually dispose a transaction or connection when you wrap it in a using scope. Second, you should rarely rely on an exception-based programming style. Your code rewritten below: using (var connection = new OracleConnection(connectionString)) { connection.Open(); //...log-2... using (var transaction = connection.BeginTransaction()) { using (var cmd = connection.CreateCommand()) { foreach (var...
java,multithreading,producer-consumer
It's limiting to use a fixed number of threads. My PC has (only) 8 cores; your intensive-sounding app is not going to use half of them. Indeed, probably only the consumer is the intensive one, so maybe 12.5%. You'll have to have several of each thread to get the...
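One way to avoid hard-coding a thread count is to size the pools from the machine; a generic sketch (the CPU-bound/I/O-bound split is an assumption, not taken from the question):

import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;

class PoolSizing {
    public static void main(String[] args) {
        int cores = Runtime.getRuntime().availableProcessors();

        // CPU-bound consumers: roughly one thread per core.
        ExecutorService consumers = Executors.newFixedThreadPool(cores);

        // I/O-bound producers can be oversubscribed, since they mostly wait.
        ExecutorService producers = Executors.newFixedThreadPool(cores * 2);

        System.out.println("cores=" + cores);
        consumers.shutdown();
        producers.shutdown();
    }
}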
c#,.net,multithreading,task,producer-consumer
Based on the comments I've posted a second and third version: http://codereview.stackexchange.com/questions/71182/producer-consumer-in-c-with-multiple-parallel-consumers-and-no-tpl-dataflow/71233#71233
java,concurrency,queue,producer-consumer
As mentioned in this post by Alex Miller, TransferQueue is more generic and useful than SynchronousQueue, as it allows you to flexibly decide whether to use normal BlockingQueue semantics or a guaranteed hand-off. In the case where items are already in the queue, calling transfer will guarantee that all...
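A quick illustration of the two modes with LinkedTransferQueue (the element values and sleep are made up):

import java.util.concurrent.LinkedTransferQueue;
import java.util.concurrent.TransferQueue;

class TransferDemo {
    public static void main(String[] args) throws InterruptedException {
        TransferQueue<String> queue = new LinkedTransferQueue<>();

        new Thread(() -> {
            try {
                // Normal BlockingQueue semantics: put() returns immediately.
                queue.put("queued item");

                // Hand-off semantics: transfer() blocks until a consumer has
                // received this element (and, being FIFO, everything before it).
                queue.transfer("handed-off item");
                System.out.println("transfer() returned: consumer received it");
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        }).start();

        Thread.sleep(200);
        System.out.println(queue.take());   // "queued item"
        System.out.println(queue.take());   // "handed-off item" (unblocks transfer)
    }
}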
multithreading,graph,buffer,deadlock,producer-consumer
If the application is in a state where one consumer is in the monitor and the other two threads are waiting for it, just signalling one thread risks waking up the other consumer, which will block again because the buffer is now empty. And since that blocked consumer does not...
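With a single monitor shared by producers and consumers, the safe (if slightly less efficient) fix is to wake everyone and let each thread re-check its own condition in a loop; a hedged Java sketch of that fix (the buffer here is invented, not the asker's code):

import java.util.ArrayDeque;
import java.util.Deque;

// One monitor, multiple producers and consumers: notifyAll() avoids waking
// only the "wrong" kind of thread.
class Buffer {
    private final Deque<Integer> items = new ArrayDeque<>();
    private final int capacity = 5;

    public synchronized void put(int v) throws InterruptedException {
        while (items.size() == capacity) {
            wait();
        }
        items.addLast(v);
        notifyAll();   // a single notify() might reach another producer instead
    }

    public synchronized int take() throws InterruptedException {
        while (items.isEmpty()) {
            wait();
        }
        int v = items.removeFirst();
        notifyAll();   // likewise, might otherwise wake only the other consumer
        return v;
    }
}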
java,multithreading,concurrency,thread-safety,producer-consumer
Your code is broken in so many ways that it's hard to decide where to start. First, your consumer is entirely enclosed in a while(!busy){ loop, so once busy becomes true, it will exit the loop instead of starting to consume items. But since you are accessing the busy...
If you're just reading from a microphone and want two WaveOuts to play it, then the simple option is to create two BufferedWaveProviders, one for each WaveOut, and then, when audio is received, send it to both. Likewise, if you were playing from an audio file to two soundcards, the...
Since the queue will be accessed by all producers and consumers, access to it needs to be synchronized. But if the tree is used only by the single consumer, why lock it? Locking it is unnecessary and will only cause extra overhead.
c,pthreads,condition,mutex,producer-consumer
If you want to use the PTHREAD_XXX_INITIALIZER macros, you should use them in the variable declaration. Also use PTHREAD_COND_INITIALIZER for condition variables: // Locks & Condition Variables pthread_mutex_t lock = PTHREAD_MUTEX_INITIALIZER; // Lock shared resources among threads pthread_cond_t full = PTHREAD_COND_INITIALIZER; // Condition indicating queue is full pthread_cond_t empty =...
python,multithreading,queue,producer-consumer
I'd call that a self-latching queue. For your primary requirement, combine the queue with a condition variable check that gracefully latches (shuts down) the queue when all producers have vacated: class SelfLatchingQueue(LatchingQueue): ... def __init__(self, num_producers): ... def close(self): '''Called by a producer to indicate that it is done producing'''...
java,multithreading,client-server,producer-consumer
You only need to notify when adding to an empty queue, because dequeuers only wait on an empty queue. Similarly, you only need to notify when dequeuing from a full queue, because enqueuers only wait on a full queue....
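That optimization is easiest to express with two separate conditions, so each signal reaches only the side that could be waiting; a sketch using ReentrantLock (the structure is illustrative, not the asker's code, and as written it assumes a single producer and a single consumer):

import java.util.ArrayDeque;
import java.util.Deque;
import java.util.concurrent.locks.Condition;
import java.util.concurrent.locks.ReentrantLock;

// Safe as written for one producer and one consumer; with several of each you
// would cascade the signal (or just signal unconditionally) to avoid lost wakeups.
class ConditionalSignalQueue<T> {
    private final Deque<T> items = new ArrayDeque<>();
    private final int capacity = 10;
    private final ReentrantLock lock = new ReentrantLock();
    private final Condition notEmpty = lock.newCondition();
    private final Condition notFull = lock.newCondition();

    public void enqueue(T item) throws InterruptedException {
        lock.lock();
        try {
            while (items.size() == capacity) {
                notFull.await();
            }
            boolean wasEmpty = items.isEmpty();
            items.addLast(item);
            if (wasEmpty) {
                notEmpty.signal();   // the dequeuer only waits on an empty queue
            }
        } finally {
            lock.unlock();
        }
    }

    public T dequeue() throws InterruptedException {
        lock.lock();
        try {
            while (items.isEmpty()) {
                notEmpty.await();
            }
            boolean wasFull = items.size() == capacity;
            T item = items.removeFirst();
            if (wasFull) {
                notFull.signal();    // the enqueuer only waits on a full queue
            }
            return item;
        } finally {
            lock.unlock();
        }
    }
}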
linux,operating-system,semaphore,producer-consumer
The pseudo-code implementation would be something like this: get the semaphore ID in the child process using the same key (either via ftok or a hardcoded key), obtain the current value of the semaphore, then perform the appropriate operations. struct shma{readindex,writeindex,buf_max,char buf[5],used_count}; main() { struct shma* shm; shmid = shmget(); shm =...
java,multithreading,design-patterns,concurrency,producer-consumer
I can suggest the following tutorials: Java Concurrency / Multithreading Tutorial Java concurrency (multi-threading) - Tutorial Java Concurrent Animated Videos in YouTube: Concurrency (Multithreading) Concepts in Java Concurrency Without Pain in Pure Java Java Tutorial : How to use Threads(Concurrent),Synchronized Keyword and Semaphores - ...
c#,async-await,producer-consumer
I'm not sure if answering my own question is OK, so I won't flag it as the answer; maybe someone comes up with a better solution :P First of all, here is the producer code: static async Task StartAsync() { using (var queue = new SendMessageQueue(10, new SendMessageService())) using (var timeoutTokenSource...
c,multithreading,condition,producer-consumer
In practice, you should not use a mutex alone to solve a producer/consumer or reader-writer problem. It does not necessarily give rise to deadlocks, but it might lead to starvation of either producers or consumers. I used a similar approach to code a reader/writer lock. You could check it out: https://github.com/prathammalik/OS161/blob/master/kern/thread/synch.c...
c,multithreading,concurrency,pthreads,producer-consumer
Incorrect producer Consider your producer: sem_wait(&producer_semaphore); pthread_mutex_lock(&write_index_mutex); my_write_index = write_index; write_index = (write_index + 1) % BUFFER_SIZE; total_produced_sum += producer_id; pthread_mutex_unlock(&write_index_mutex); // Producer #1 stops right here. buffer[my_write_index] = producer_id; sem_post(&consumer_semaphore); Let's suppose the following sequence happens: Producer #1 runs until the comment above and then stops. Suppose its my_write_index...
java,concurrency,producer-consumer
By no means; ArrayBlockingQueue is better. When using an Exchanger, one of the communicating threads is always blocked, while with an ArrayBlockingQueue, blocking occurs only when the producer's and consumer's speeds are unbalanced. Using ArrayLists makes sense to reduce contention, and they can be used together with an ArrayBlockingQueue too. UPDATE Exchanger.exchange() Waits for another...
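A sketch of the batching idea hinted at above: pass ArrayList chunks through an ArrayBlockingQueue so the threads contend on the queue once per batch rather than once per element (the sizes and element type are arbitrary):

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

class BatchingExample {
    public static void main(String[] args) throws InterruptedException {
        BlockingQueue<List<Integer>> queue = new ArrayBlockingQueue<>(16);

        Thread producer = new Thread(() -> {
            try {
                List<Integer> batch = new ArrayList<>();
                for (int i = 0; i < 1000; i++) {
                    batch.add(i);
                    if (batch.size() == 100) {     // hand over 100 items at a time
                        queue.put(batch);
                        batch = new ArrayList<>();
                    }
                }
                if (!batch.isEmpty()) {
                    queue.put(batch);              // flush the final partial batch
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        int consumed = 0;
        while (consumed < 1000) {
            List<Integer> batch = queue.take();    // blocks only if the producer lags
            consumed += batch.size();
        }
        System.out.println("consumed " + consumed + " items");
    }
}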
java,collections,garbage-collection,out-of-memory,producer-consumer
The problem faced in the above case is solved by limiting the size of the blocking queue. The data submitted to the queue was quite large, so in such a case the queue size should be kept minimal, or equal to the thread count. This way we limit the number of idle Java objects. Objects...
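Concretely, the fix amounts to giving the queue an explicit bound so producers block instead of piling up objects; a minimal sketch (the bound of 4 and the payload sizes are just examples tied to a hypothetical 4-thread consumer pool):

import java.util.concurrent.ArrayBlockingQueue;
import java.util.concurrent.BlockingQueue;

class BoundedQueueExample {
    public static void main(String[] args) throws InterruptedException {
        int consumerThreads = 4;

        // Capacity roughly equal to the consumer count: at most a handful of
        // large payloads sit idle on the heap at any moment.
        BlockingQueue<byte[]> queue = new ArrayBlockingQueue<>(consumerThreads);

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 20; i++) {
                    byte[] bigPayload = new byte[1_000_000];
                    queue.put(bigPayload);   // blocks once the bound is reached
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        for (int i = 0; i < 20; i++) {
            byte[] payload = queue.take();
            // ...process and drop the reference so it can be collected...
        }
        System.out.println("done");
    }
}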
python,rabbitmq,rpc,producer-consumer,pika
OK, I know what was going on. In worker.py, I misspelled ch.basic_ack(delivery_tag=method.delivery_tag) as ch.basic_ask(delivery_tag=method.delivery_tag) My God! T.T...
c,multithreading,producer-consumer
Use a semaphore to synchronize (signal the other thread when one has executed its iteration): Semaphore s1, s2; Producer: P(&s1) // do your operations print() V(&s2) // unlock the consumer thread Consumer: P(&s2) // do your operations print() V(&s1) // unlock the producer thread Just make sure you do it in such a...
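The same ping-pong expressed with java.util.concurrent.Semaphore, with s1 starting at 1 and s2 at 0 so the producer goes first (a sketch of the scheme above, not the asker's code):

import java.util.concurrent.Semaphore;

class AlternatingThreads {
    public static void main(String[] args) {
        Semaphore s1 = new Semaphore(1);   // producer may run first
        Semaphore s2 = new Semaphore(0);   // consumer must wait

        Thread producer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    s1.acquire();                       // P(s1)
                    System.out.println("produced " + i);
                    s2.release();                       // V(s2): unlock the consumer
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        Thread consumer = new Thread(() -> {
            try {
                for (int i = 0; i < 5; i++) {
                    s2.acquire();                       // P(s2)
                    System.out.println("consumed " + i);
                    s1.release();                       // V(s1): unlock the producer
                }
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });

        producer.start();
        consumer.start();
    }
}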
c++,boost,vector,producer-consumer,lock-free
You're basically proposing to use 2x(cores-1) spsc_queues in their intended fashion. Yes, this would work. I don't see how you would handle the responses ("incoming queues") on the main thread, though. Bear in mind there is no "waiting" operation on the incoming queue, and neither would you want one...
java,producer-consumer,arraydeque
ArrayDeque is not thread-safe. You'll have to guard it with a lock (the synchronized keyword or a read/write lock) in order to access it safely from different threads. Another option is to use a thread-safe deque implementation, preferably a blocking one (e.g. LinkedBlockingDeque), which will also allow you to avoid...
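For the second option, a short sketch with LinkedBlockingDeque (the element type, bound, and values are arbitrary):

import java.util.concurrent.BlockingDeque;
import java.util.concurrent.LinkedBlockingDeque;

class DequeExample {
    public static void main(String[] args) throws InterruptedException {
        // Thread-safe and optionally bounded; no external locking required.
        BlockingDeque<String> deque = new LinkedBlockingDeque<>(100);

        Thread producer = new Thread(() -> {
            try {
                deque.putLast("job-1");   // blocks if the deque is full
                deque.putFirst("urgent"); // a deque also allows front insertion
            } catch (InterruptedException e) {
                Thread.currentThread().interrupt();
            }
        });
        producer.start();

        System.out.println(deque.takeFirst());  // blocks until an element exists
        System.out.println(deque.takeFirst());
    }
}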