As trading speeds keep climbing, more and more traders go out of business simply because they can't keep up. Low-latency trading has a reputation for being expensive and complex. Our solutions will change your mind: don't pay absurd fees for the privilege of developing and maintaining your new system yourself.
Low-latency trading is not trivial, but it is not rocket science either. Latency comes from two sources: 1. Physics - how far you are from the exchange (each kilometer of fiber adds roughly 5 microseconds of one-way latency), and 2. Technology - how fast your hardware and software process signals.
Many providers address one source or the other and leave the hard work of bridging the gap to you.
A common example: the technology is fast, but the algo is physically separated from the exchange - whether by distance or by the number of hops and servers in between. If your vendor's feed handler or risk controls reside on a separate machine, congratulations - you can wave goodbye to low latency (at least 10 extra microseconds of overhead, guaranteed).
If, on the other hand, you bring it in-house and run your strategy on your own machine that talks directly to the exchange, your firm absorbs all the monthly costs (colocation space, market data transmission fees, production system management, etc.), and you either need your own skilled (expensive) developers or have to pay asinine fees to a software vendor on top of it all.
Is there a better way?
To find the answer, we asked ourselves how to make DMA as fast as possible while keeping it accessible to many - and, just as importantly, what NOT to do.
Solving the first source of added latency is simple: the algorithms must run on the same machine that connects to the exchange, on the best available hardware. We add no extra separation between you and the matching engine - physics is physics.
On the technology end of things, we built our system with three things in mind: 1. data processing, 2. resource sharing and 3. privacy.
We use the latest technology and programming techniques to ensure that:
1. The operating system (kernel) is not involved in any of these steps; we use proprietary lock-free, cache-efficient programming techniques.
2. Your algo has direct access to data in shared memory - no added data-transmission latency.
3. You pay only for the dedicated resources your algo needs, while remaining isolated and protected.
That's it - it's common sense, and we use it.