It does make sense, yes. Given that independent events are published to potentially multiple subscribers, and each subscriber has its own ideas about where and how to store them, I'm not sure that looking at it purely from a performance point of view is a good idea. What is the harm in storing them one by one?
The problem is that once you introduce something that batches things, either in memory or in storage, the solution becomes complex and error-prone. In the worst case you might even expose yourself to message-loss scenarios, because the batching component needs to receive the messages, buffer them (and thus acknowledge the individual transport transactions), and only later store them. There are also quite a few scenarios that need to be taken into account: can you really assume a continuous stream of incoming events? Your buffering logic would need to be both time-window-based and item-count-based. Then there are other factors: what previously were individual transactions that could fail independently are suddenly bundled into one storage transaction. How would you retry that? In effect you are creating an in-memory transport and need to reimplement parts of the queuing system, including its failure handling.
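To make the pitfalls concrete, here is a minimal sketch (plain Python, all names hypothetical, not any framework's actual API) of a count-plus-time-window buffer. Note the window in which messages are acknowledged on the transport but exist only in memory:

```python
import threading
import time


class BatchBuffer:
    """Illustrative sketch of a count- and time-window-based batcher.

    Once a message is added here (and its transport transaction has been
    acknowledged), it lives only in memory until the flush succeeds --
    a crash inside that window loses the whole batch.
    """

    def __init__(self, store, max_items=100, max_age_seconds=5.0):
        self._store = store              # callable taking a list of messages
        self._max_items = max_items
        self._max_age = max_age_seconds
        self._buffer = []
        self._first_added = None
        self._lock = threading.Lock()

    def add(self, message):
        with self._lock:
            if not self._buffer:
                self._first_added = time.monotonic()
            self._buffer.append(message)
            # The time condition only fires when another message arrives;
            # if the stream pauses, a separate background timer is needed
            # to flush the stragglers -- more moving parts to get right.
            if self._should_flush():
                self._flush_locked()

    def _should_flush(self):
        if len(self._buffer) >= self._max_items:
            return True
        return time.monotonic() - self._first_added >= self._max_age

    def _flush_locked(self):
        batch, self._buffer = self._buffer, []
        try:
            self._store(batch)  # one storage transaction for many messages
        except Exception:
            # If this fails, which message caused it? Retrying the whole
            # batch may re-insert the good ones; retrying per message means
            # reimplementing the queue's failure handling in memory.
            raise
```

Even this toy version already needs locking, two flush triggers, and an answer to "what happens when the batched write fails", which is exactly the complexity argued against above.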
That being said, if you have proof that you really need to do this, you could use a saga to accumulate the state and batch the events together within the saga. The saga then sends out a local command that writes everything into the database. With that you could batch and still have the possibility to retry the inserts. But every saga update still means a storage update, so I'm not sure how much you'd gain unless that specific endpoint used a more performant persister.
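For comparison, the shape of that saga approach could be sketched roughly like this (again plain Python with hypothetical names, not NServiceBus's actual saga API). The point it illustrates is that every incoming event still costs one saga-state write, while the eventual bulk insert is delegated to a retryable local command:

```python
class BatchingSaga:
    """Sketch of the saga approach: every event updates persisted saga
    state (so each event still incurs a storage write), and once enough
    events have been collected, a local command carries the whole batch
    to a handler that performs one bulk insert -- and can be retried."""

    def __init__(self, save_state, send_local, batch_size=100):
        self._save_state = save_state    # persists saga state on each update
        self._send_local = send_local    # dispatches a retryable local command
        self._batch_size = batch_size
        self._events = []

    def handle(self, event):
        self._events.append(event)
        self._save_state(list(self._events))  # one storage update per event
        if len(self._events) >= self._batch_size:
            # Hand the batch to a local handler; the queuing system's
            # built-in recoverability can retry this command safely.
            self._send_local({"type": "WriteBatch", "events": self._events})
            self._events = []
            self._save_state([])  # mark the saga instance as completed/reset
```

Counting the `save_state` calls makes the trade-off visible: batching the final insert does not remove the per-event storage cost of keeping the saga durable.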
Can you elaborate a bit more on what drives you towards batching on the receiver side?