Transfer gigantic DataTables over WCF / .NET Remoting

Capacity improvements package for .NET Remoting / WCF

This solution deals with transferring huge DataTables over WCF and .NET Remoting. Imagine an online casino that needs to clear its transactions at the end of the day. The clearing service collects transaction data from a database populated by all of the Poker and Blackjack servers, and the Reporting service must then receive a huge DataTable containing every online gaming transaction. With a table that size, the transport is very likely to fail.

Transporting a large DataTable between a server and a remote client raises several issues, all of which stem from .NET serialization. Serializing a large DataTable is memory-hungry: a large enough table will cause the client to receive a System.OutOfMemoryException or System.InsufficientMemoryException. These exceptions cannot be caught on the server side, because they occur in the innards of the framework code that handles serialization and transport. If the DataTable is really large, the framework will tear down the server process altogether; there is no way around that. Another issue is throughput, which is unremarkable and becomes noticeable once the table grows large.

The solution at hand circumvents these problems by partitioning the DataTable into chunks and transferring the chunks in a multi-threaded fashion. In a nutshell, the server returns an object to the client, through which the client makes concurrent calls back to the server. This approach enables a table of unlimited size to be transferred between server and client. It also improves throughput by up to a staggering factor of 3.5 (plain WCF transfers a table at 3,425 Kb/sec, whereas the current solution boosts throughput to 12,245 Kb/sec), thanks to the concurrent requests; it acts much like a download accelerator.
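To make the chunk-and-merge idea concrete, here is a minimal sketch of what such a transfer can look like over WCF. The contract and member names (ITableChunkService, GetChunkCount, GetChunk, ChunkedTableTransfer) are illustrative assumptions for this sketch, not the actual API shipped with the solution:

```csharp
// Sketch of the chunked, concurrent DataTable transfer described above.
// All type and method names here are hypothetical placeholders.
using System;
using System.Data;
using System.ServiceModel;
using System.Threading.Tasks;

[ServiceContract]
public interface ITableChunkService
{
    [OperationContract]
    int GetChunkCount();            // number of row partitions held by the server

    [OperationContract]
    DataTable GetChunk(int index);  // one partition, small enough to serialize safely
}

public static class ChunkedTableTransfer
{
    // Server side: split a big table into fixed-size row chunks, each of
    // which can be serialized without exhausting memory.
    public static DataTable[] Partition(DataTable source, int rowsPerChunk)
    {
        int count = (source.Rows.Count + rowsPerChunk - 1) / rowsPerChunk;
        var chunks = new DataTable[count];
        for (int i = 0; i < count; i++)
        {
            chunks[i] = source.Clone(); // copies the schema only, no rows
            int last = Math.Min((i + 1) * rowsPerChunk, source.Rows.Count);
            for (int r = i * rowsPerChunk; r < last; r++)
                chunks[i].ImportRow(source.Rows[r]);
        }
        return chunks;
    }

    // Client side: fetch all chunks concurrently, then merge them locally.
    // The parallel requests keep the wire busy, like a download accelerator.
    public static DataTable Download(ITableChunkService service)
    {
        int count = service.GetChunkCount();
        if (count == 0)
            return new DataTable();

        var chunks = new DataTable[count];
        Parallel.For(0, count, i => chunks[i] = service.GetChunk(i));

        DataTable result = chunks[0].Clone();
        foreach (DataTable chunk in chunks)
            result.Merge(chunk);
        return result;
    }
}
```

The chunk size is the main tuning knob in a design like this: smaller chunks keep peak serialization memory low on both ends, while larger chunks reduce per-call overhead, so the sweet spot depends on row width and the binding in use.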