<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2//EN">
<HTML>
<HEAD>
<TITLE> [Nel] NeL Network Engine</TITLE>
<LINK REL="Index" HREF="index.html" >
<LINK REL="made" HREF="mailto:cado%40nevrax.com">
<LINK REL="Previous" HREF="000249.html">
<LINK REL="Next" HREF="000253.html">
</HEAD>
<BODY BGCOLOR="#ffffff">
<H1>[Nel] NeL Network Engine</H1>
<B>Olivier Cado</B>
<A HREF="mailto:cado%40nevrax.com"
TITLE="[Nel] NeL Network Engine">cado@nevrax.com</A><BR>
<I>Thu, 22 Feb 2001 16:41:12 +0100</I>
<P><UL>
<LI> Previous message: <A HREF="000249.html">[Nel] agent service</A></li>
<LI> Next message: <A HREF="000253.html">[Nel] NeL Network Engine</A></li>
<LI> <B>Messages sorted by:</B>
<a href="date.html#252">[ date ]</a>
<a href="thread.html#252">[ thread ]</a>
<a href="subject.html#252">[ subject ]</a>
<a href="author.html#252">[ author ]</a>
</LI>
</UL>
<HR>
<!--beginarticle-->
<PRE>We are very pleased that you are interested in NeL and our project.
Thanks to free software, we are now working in cooperation with the
community. So let's discuss some network issues.
Here is the purpose of this message:
- Present the future of the NeL Network Engine
- Ask for your input on a couple of points
At present, the NeL Network Engine is single-threaded. We plan to
rewrite the engine using multi-threading. The library will be made up of
five layers:
- Layer 0: socket wrapper (roughly, the present CBaseSocket plus
listening-socket functionality)
- Layer 1: multiple socket multi-threaded I/O mechanism
- Layer 2: adapted to CMemStream (allows serialization)
- Layer 3: adapted to CMessage (contains type information)
- Layer 4: using callbacks (as presently used in the services and
provided by CMsgSocket)
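To make the layering concrete, here is a minimal sketch of the kind of
serialization buffer Layer 2 would sit on. The MemStream class below is a
hypothetical stand-in, not the real CMemStream interface: values written on
one side come back in the same order on the other, which is what Layer 2
would move over Layer 1.

```cpp
#include <cassert>
#include <cstddef>
#include <cstring>
#include <vector>

// Hypothetical stand-in for a CMemStream-like buffer (not NeL's API):
// values are serialized into a flat byte buffer and read back in order.
class MemStream {
public:
    MemStream() : _readPos(0) {}

    // Append a 32-bit integer to the buffer (output serialization).
    void serialOut(unsigned int v) {
        const char *p = reinterpret_cast<const char *>(&v);
        _buffer.insert(_buffer.end(), p, p + sizeof(v));
    }

    // Read the next 32-bit integer back (input serialization).
    void serialIn(unsigned int &v) {
        std::memcpy(&v, &_buffer[_readPos], sizeof(v));
        _readPos += sizeof(v);
    }

    std::size_t size() const { return _buffer.size(); }

private:
    std::vector<char> _buffer;
    std::size_t _readPos;
};
```

Layer 3 would then prefix such a buffer with type information, and Layer 4
would dispatch the resulting messages to callbacks.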
The main features of Layer 1 are as follows:
External view:
The user programmer will be able to send data (A), check whether some
data has been received (B), and if so fetch a data block from the
receive queue.
Implementation:
(A) When the user requests to send data, the data block is put into a
send queue. The actual sending is triggered by a time flush trigger,
a size flush trigger or an explicit flush trigger (at the user's
request). Let's say the queue control is executed in the main thread, in
an update() method called at regular intervals.
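As a rough sketch of the trigger logic in (A) — all class and member names
below are hypothetical, not actual NeL code:

```cpp
#include <cassert>
#include <cstddef>
#include <ctime>
#include <vector>

// Hypothetical sketch of the Layer-1 send side: outgoing blocks
// accumulate in a queue and are flushed by a size trigger, a time
// trigger, or an explicit flush() call.
class SendQueue {
public:
    SendQueue(std::size_t sizeTrigger, double timeTriggerSeconds)
        : _sizeTrigger(sizeTrigger), _timeTrigger(timeTriggerSeconds),
          _pending(0), _lastFlush(std::time(0)) {}

    // Called by the user: enqueue a block; flush if the size trigger fires.
    void send(const std::vector<char> &block) {
        _queue.push_back(block);
        _pending += block.size();
        if (_pending >= _sizeTrigger)
            flush();
    }

    // Called at regular intervals from the main thread:
    // flush if the time trigger fires.
    void update() {
        if (!_queue.empty() &&
            std::difftime(std::time(0), _lastFlush) >= _timeTrigger)
            flush();
    }

    // Explicit flush, at the user's request.
    void flush() {
        // Here the real engine would write the queued blocks to the socket.
        _queue.clear();
        _pending = 0;
        _lastFlush = std::time(0);
    }

    std::size_t pendingBytes() const { return _pending; }

private:
    std::vector<std::vector<char> > _queue;
    std::size_t _sizeTrigger;
    double _timeTrigger;
    std::size_t _pending;
    std::time_t _lastFlush;
};
```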
(B) Each connection is handled by a separate thread that sleeps while
not receiving data, so that no CPU time is used when nothing arrives on
a particular socket. When incoming data is actually received, it is put
into a global receive queue (synchronized with a mutex, of course) and
popped when the user requests a block.
This implementation still raises a few questions:
(A) If no buffer space is available within the transport system to hold
the data to be transmitted, the actual sending will block. Has anybody
come across this case? When does it happen in practice?
(B) As we are building a *massively* multiplayer game, we expect to have
a great number of connections (even though not all clients will be
connected to the same machine), and therefore a great number of threads.
Does anybody know the scale limits on Linux systems (and on Windows, by
the way), i.e. the optimum and maximum number of threads per process and
per system?
I'm sure many of you are seasoned Linux specialists, so you probably
have an idea about these issues.
Thanks.
Olivier Cado
--
<A HREF="http://www.nevrax.org">http://www.nevrax.org</A>
</pre>
<!--endarticle-->
<HR>
</body></html>