From 0ea5fc66924303d1bf73ba283a383e2aadee02f2 Mon Sep 17 00:00:00 2001 From: neodarz Date: Sat, 11 Aug 2018 20:21:34 +0200 Subject: Initial commit --- pipermail/nel/2001-February/000252.html | 104 ++++++++++++++++++++++++++++++++ 1 file changed, 104 insertions(+) create mode 100644 pipermail/nel/2001-February/000252.html (limited to 'pipermail/nel/2001-February/000252.html') diff --git a/pipermail/nel/2001-February/000252.html b/pipermail/nel/2001-February/000252.html new file mode 100644 index 00000000..a3922258 --- /dev/null +++ b/pipermail/nel/2001-February/000252.html @@ -0,0 +1,104 @@ + + + + [Nel] NeL Network Engine + + + + + + +

[Nel] NeL Network Engine

+ Olivier Cado + cado@nevrax.com
+ Thu, 22 Feb 2001 16:41:12 +0100 +

+
+ +
We are very pleased that you are interested in NeL and our project.
+Thanks to free software, we are now working in cooperation with the
+community. So let's discuss some network issues.
+
+Here is the purpose of this message:
+- Present the future of the NeL Network Engine
+- Ask for your input on a couple of points
+
+At present, the NeL Network Engine is single-threaded. We plan to
+rewrite the engine using multi-threading. The library will be made up of
+five layers:
+- Layer 0: socket wrapper (roughly, present CBaseSocket + listening
+socket functionalities)
+- Layer 1: multiple socket multi-threaded I/O mechanism
+- Layer 2: adapted to CMemStream (allows serialization)
+- Layer 3: adapted to CMessage (contains type information)
+- Layer 4: using callbacks (as presently used in the services and
+provided by CMsgSocket)
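As a rough illustration of how the layers above might stack, here is a minimal sketch going from a typed message (the CMessage role, Layer 3) through serialization (the CMemStream role, Layer 2) down to a raw socket wrapper (Layer 0). All class and function names here are illustrative assumptions, not the real NeL API.

```cpp
#include <cstdint>
#include <string>
#include <vector>

// Layer 0: socket wrapper -- a byte sink stands in for the real socket.
struct CSocketSketch {
    std::vector<uint8_t> wire;
    void rawSend(const std::vector<uint8_t> &bytes) {
        wire.insert(wire.end(), bytes.begin(), bytes.end());
    }
};

// Layer 2: serialization into a flat byte stream (length-prefixed here).
void serializeString(std::vector<uint8_t> &out, const std::string &s) {
    out.push_back(static_cast<uint8_t>(s.size()));
    out.insert(out.end(), s.begin(), s.end());
}

// Layer 3: a message that carries type information alongside its payload.
struct CMessageSketch {
    std::string type;
    std::string body;
    std::vector<uint8_t> toBytes() const {
        std::vector<uint8_t> out;
        serializeString(out, type);   // type name first, then the payload
        serializeString(out, body);
        return out;
    }
};
```

Layers 1 and 4 (threaded I/O and callbacks) would sit between these, wrapping the same byte blocks.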
+
+The main features of Layer 1 are as follows:
+
+External view:
+The user programmer will be able to send data (A), to check if some data
+has been received (B), and if so to get a data block from the receive
+queue.
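The external view above could look something like this sketch: the user can send a block (A), check whether data has been received (B), and pop a block from the receive queue. Class and method names are illustrative assumptions, not the actual NeL interface.

```cpp
#include <cstdint>
#include <deque>
#include <vector>

using TBlock = std::vector<uint8_t>;

class CSocketLayer1 {
public:
    // (A) enqueue a block; actual transmission happens later, on a flush
    void send(const TBlock &block) { _sendQueue.push_back(block); }

    // (B) non-blocking check for received data
    bool dataAvailable() const { return !_recvQueue.empty(); }

    // pop one block from the receive queue (call only if dataAvailable())
    TBlock receive() {
        TBlock b = _recvQueue.front();
        _recvQueue.pop_front();
        return b;
    }

    // stands in for what a receiver thread would do when data arrives
    void pushReceived(const TBlock &block) { _recvQueue.push_back(block); }

private:
    std::deque<TBlock> _sendQueue;
    std::deque<TBlock> _recvQueue;
};
```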
+
+Implementation:
+(A) When the user requests to send data, the data block is put into a
+send queue. The actual sending is triggered by a time flush trigger, a
+size flush trigger or an explicit flush trigger (at the user's request).
+Let's say the queue control is executed in the main thread, in an
+update() method called regularly.
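The three flush triggers for (A) could be sketched as follows: a size trigger inside send(), a time trigger inside an update() called regularly from the main thread, and an explicit flush(). All names and policy details are assumptions for illustration.

```cpp
#include <chrono>
#include <cstddef>
#include <cstdint>
#include <vector>

class CSendQueueSketch {
public:
    CSendQueueSketch(std::chrono::milliseconds period, std::size_t sizeLimit)
        : _period(period), _sizeLimit(sizeLimit),
          _lastFlush(std::chrono::steady_clock::now()) {}

    void send(const std::vector<uint8_t> &block) {
        _buffer.insert(_buffer.end(), block.begin(), block.end());
        if (_buffer.size() >= _sizeLimit)
            flush();                                  // size flush trigger
    }

    void update() {                                   // called regularly
        if (!_buffer.empty() &&
            std::chrono::steady_clock::now() - _lastFlush >= _period)
            flush();                                  // time flush trigger
    }

    void flush() {                                    // explicit flush trigger
        if (!_buffer.empty()) {
            ++_flushCount;   // a real implementation would write to the socket
            _buffer.clear();
        }
        _lastFlush = std::chrono::steady_clock::now();
    }

    int flushCount() const { return _flushCount; }
    std::size_t pending() const { return _buffer.size(); }

private:
    std::vector<uint8_t> _buffer;
    std::chrono::milliseconds _period;
    std::size_t _sizeLimit;
    std::chrono::steady_clock::time_point _lastFlush;
    int _flushCount = 0;
};
```

Batching like this trades a little latency for far fewer system calls.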
+(B) Each connection is handled by a separate thread that sleeps while
+not receiving data so that no CPU time will be used if nothing is
+received on a particular socket. When incoming data is actually
+received, it is put into a global receive queue (synchronized with a
+mutex of course) and popped when the user requests to receive a block.
+
+This implementation still raises a few questions:
+(A) If no buffer space is available within the transport system to hold
+the data to be transmitted, the actual sending will block. Has anybody
+come across this case? When does this happen in practice?
+(B) As we are building a *massively* multiplayer game, we expect to have
+a great number of connections (even though not all clients will be
+connected to the same machine), and therefore a great number of threads.
+Does anybody know the scale limits on Linux systems (and on Windows,
+BTW), i.e. the optimum and maximum numbers of threads per process and
+per system?
+
+I'm sure a lot of you are great Linux specialists, so you probably have
+an idea about these issues.
+
+Thanks.
+Olivier Cado
+--
+http://www.nevrax.org
+
+
+ + + + + + + +
+

+ -- cgit v1.2.1