Diffstat (limited to 'pipermail/nel/2001-July/000487.html')
-rw-r--r--  pipermail/nel/2001-July/000487.html  147
1 file changed, 147 insertions, 0 deletions
diff --git a/pipermail/nel/2001-July/000487.html b/pipermail/nel/2001-July/000487.html
new file mode 100644
index 00000000..476feec6
--- /dev/null
+++ b/pipermail/nel/2001-July/000487.html
@@ -0,0 +1,147 @@
+<!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 3.2//EN">
+<HTML>
+ <HEAD>
+ <TITLE> [Nel] TCP vs. UDP</TITLE>
+ <LINK REL="Index" HREF="index.html" >
+ <LINK REL="made" HREF="mailto:zane%40supernova.org">
+ <LINK REL="Previous" HREF="000485.html">
+ <LINK REL="Next" HREF="000490.html">
+ </HEAD>
+ <BODY BGCOLOR="#ffffff">
+ <H1>[Nel] TCP vs. UDP</H1>
+ <B>Zane</B>
+ <A HREF="mailto:zane%40supernova.org"
+ TITLE="[Nel] TCP vs. UDP">zane@supernova.org</A><BR>
+ <I>Thu, 5 Jul 2001 12:05:07 -0700</I>
+ <P><UL>
+ <LI> Previous message: <A HREF="000485.html">[Nel] TCP vs. UDP</A></li>
+ <LI> Next message: <A HREF="000490.html">[Nel] TCP vs. UDP</A></li>
+ <LI> <B>Messages sorted by:</B>
+ <a href="date.html#487">[ date ]</a>
+ <a href="thread.html#487">[ thread ]</a>
+ <a href="subject.html#487">[ subject ]</a>
+ <a href="author.html#487">[ author ]</a>
+ </LI>
+ </UL>
+ <HR>
+<!--beginarticle-->
+<PRE>----- Original Message -----
+From: &quot;Vincent Archer&quot; &lt;<A HREF="mailto:archer@frmug.org">archer@frmug.org</A>&gt;
+Sent: Thursday, July 05, 2001 9:04 AM
+
+
+&gt;<i> This post (and the rest of the discussion) does highlight the problems of
+</I>&gt;<i> AO. However, only one post, in the whole thread, seems to be close to the
+</I>&gt;<i> real &quot;problem&quot;.
+</I>&gt;<i>
+</I>&gt;<i> I have two friends who are playing AO together. They often experience
+</I>&gt;<i> &quot;bad lag&quot; (i.e. 20-30s delays between a command and its execution).
+</I>&gt;<i> However, there's one strange thing during these periods of bad lag.
+</I>
+I also play AO, but every time I've experienced real &quot;bad lag&quot; I couldn't
+sit or do any action that requires server-side confirmation, including ALL
+chat channels. In fact, I frequently say or shout something so that as soon
+as it shows up I know the lag is over, and this works like a charm. What
+they're experiencing is a different type of lag than the one discussed in
+the above post: a particular zone they're in lags (or the server hosting
+that zone does), but not all servers are affected. I haven't experienced
+that type of lag since beta.
+
+As a side note, I've noticed that when I get the &quot;bad lag&quot;, others around
+me appear to get it too (at least sometimes), which lends weight to the
+packet-storm theory.
+
+&gt;<i> My guess is that their architecture is based on a front-end/zone service
+</I>&gt;<i> model. Clients connect to a front end, and said front-end connects to a
+</I>&gt;<i> zone service, depending on the zone you are in. This is further supported
+</I>&gt;<i> by various analysis points during beta, notably when the zone service
+</I>&gt;<i> crashed while I was in it (and the whole mission dungeon got reset and
+</I>&gt;<i> randomly re-rolled), and the numerous problems people have for zoning
+</I>&gt;<i> (zone... after a strangely fixed 45s - the default TCP connection
+</I>&gt;<i> timeout - you get &quot;Area Change not initiated on server&quot;).
+</I>&gt;<i>
+</I>&gt;<i> So you have:
+</I>&gt;<i>
+</I>&gt;<i> Client ---- TCP ----&gt; Front End ---- TCP ----&gt; Zone server
+</I>&gt;<i> ^ /
+</I>&gt;<i> | /
+</I>&gt;<i> V /
+</I>&gt;<i> Client ---- TCP ----&gt; Front End ---- TCP -/
+</I>&gt;<i>
+</I>&gt;<i> which is probably the worst architecture I can imagine, especially as
+</I>&gt;<i> there appears to be one front-end per client, and front ends close and
+</I>&gt;<i> open communication to zone servers. :(
+</I>
+There doesn't need to be one front-end per client. There can be several
+load-balanced front-ends that each handle multiple clients. The major
+problem with this is that if one front-end crashes, all of its clients get
+dropped (although under UNIX you can pull some funky socket tricks and
+recover from a crash without dropping most players, just badly lagging
+them &amp; losing some updates; I haven't been able to get win32 to do this).
+
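+For reference, this is roughly the kind of socket trick I mean. A minimal
+sketch, assuming POSIX and a UNIX-domain channel between the front-end and
+a watchdog process (the names and the watchdog itself are made up for
+illustration): the front-end hands each accepted client descriptor to the
+watchdog via SCM_RIGHTS, so a respawned front-end can ask for them back
+after a crash.
+
+#include &lt;sys/types.h&gt;
+#include &lt;sys/socket.h&gt;
+#include &lt;sys/uio.h&gt;
+#include &lt;string.h&gt;
+
+/* Hand an open client socket to the watchdog over a UNIX-domain
+   channel; the receiving side mirrors this with recvmsg(). */
+int send_fd(int chan, int fd)
+{
+    char dummy = 'F';               /* some systems need 1 real byte */
+    struct iovec iov = { &amp;dummy, 1 };
+    char ctl[CMSG_SPACE(sizeof(int))];
+    struct msghdr msg;
+    struct cmsghdr *cm;
+
+    memset(&amp;msg, 0, sizeof(msg));
+    msg.msg_iov = &amp;iov;
+    msg.msg_iovlen = 1;
+    msg.msg_control = ctl;
+    msg.msg_controllen = sizeof(ctl);
+    cm = CMSG_FIRSTHDR(&amp;msg);
+    cm-&gt;cmsg_level = SOL_SOCKET;
+    cm-&gt;cmsg_type = SCM_RIGHTS;     /* kernel duplicates the fd */
+    cm-&gt;cmsg_len = CMSG_LEN(sizeof(int));
+    memcpy(CMSG_DATA(cm), &amp;fd, sizeof(int));
+    return sendmsg(chan, &amp;msg, 0) &lt; 0 ? -1 : 0;
+}
+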
+The good side to this is you only need one connection per client per
+protocol (so 2 connections if using both TCP and UDP). Unfortunately, with
+TCP that's both a pro and a con. With one TCP connection, a dropped packet
+on a chat message delays all other TCP traffic, but it also saves
+bandwidth and server resources compared to multiple connections (larger,
+more efficient packets). Also, with a single front end you can have as
+many separate services as you want without needing a ton of different
+connections to the client.
+
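+To make that last point concrete, here's a minimal sketch of what I mean
+by one connection carrying many services: a made-up 3-byte frame header
+(service id plus a big-endian length) in front of every message, so chat,
+inventory, etc. can all share the single client/front-end stream.
+
+#include &lt;stdint.h&gt;
+#include &lt;string.h&gt;
+
+enum { SVC_CHAT, SVC_INVENTORY, SVC_WORLD, SVC_COMBAT };
+
+/* Prefix a payload with [service id][16-bit length] so one TCP stream
+   can carry every service. 'out' must hold len + 3 bytes. */
+size_t frame_message(uint8_t svc, const void *payload, uint16_t len,
+                     uint8_t *out)
+{
+    out[0] = svc;
+    out[1] = (uint8_t)(len &gt;&gt; 8);   /* high byte of length */
+    out[2] = (uint8_t)(len &amp; 0xff); /* low byte of length */
+    memcpy(out + 3, payload, len);
+    return (size_t)len + 3;
+}
+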
+Regardless, we have no data as to whether or not AO is doing it that way.
+Maybe tonight I'll run it in windowed mode and check netstat. If we've got
+more than one active TCP connection to Funcom servers, then that model
+probably isn't what they're using.
+
+On a side note, using multiple TCP connections would eliminate some of the
+packet-loss latency issues at the cost of increased bandwidth. Say you have
+one connection for chat channels, one for inventory &amp; stat handling, one
+for world actions, and one for combat. If connection 1 drops a packet, its
+lag won't affect the other connections as much. But of course, if they all
+drop packets at the same time, we get the packet-storm problem again. :)
+
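+If anyone wants to play with that layout, a bare-bones sketch (the host
+and port numbers are invented for the example):
+
+#include &lt;sys/socket.h&gt;
+#include &lt;netinet/in.h&gt;
+#include &lt;arpa/inet.h&gt;
+#include &lt;string.h&gt;
+#include &lt;unistd.h&gt;
+
+/* One TCP stream per logical channel, so a retransmit stall on the
+   chat stream can't hold up combat updates. */
+int connect_service(const char *host, unsigned short port)
+{
+    struct sockaddr_in a;
+    int s = socket(AF_INET, SOCK_STREAM, 0);
+
+    if (s &lt; 0)
+        return -1;
+    memset(&amp;a, 0, sizeof(a));
+    a.sin_family = AF_INET;
+    a.sin_port = htons(port);
+    a.sin_addr.s_addr = inet_addr(host);
+    if (connect(s, (struct sockaddr *)&amp;a, sizeof(a)) &lt; 0) {
+        close(s);
+        return -1;
+    }
+    return s;  /* e.g. chat = connect_service(&quot;10.0.0.1&quot;, 9000); */
+}
+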
+&gt;<i> Packet loss is a non-sequitur under TCP. You *cannot* lose packets
+</I>&gt;<i> under TCP :) (you lose connection first)
+</I>
+Yes, but TCP has latency issues and UDP has packet-loss issues. Why can't
+we have the uber protocol that has neither? :)
+
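+The closest thing I know of is rolling your own: UDP with sequence
+numbers, acks and retransmission bolted on for just the messages that
+need them, while fresh state simply supersedes dropped packets. A minimal
+sketch of the sender-side bookkeeping (everything here is invented for
+illustration):
+
+#include &lt;stdint.h&gt;
+#include &lt;time.h&gt;
+
+#define MAX_PENDING 64
+
+struct pending {
+    uint32_t seq;       /* sequence number stamped on the datagram */
+    time_t   sent_at;   /* when it was last transmitted */
+    int      in_use;
+};
+
+static struct pending window[MAX_PENDING];
+
+/* An ACK for 'seq' arrived: stop resending that datagram. */
+void on_ack(uint32_t seq)
+{
+    int i;
+    for (i = 0; i &lt; MAX_PENDING; i++)
+        if (window[i].in_use &amp;&amp; window[i].seq == seq)
+            window[i].in_use = 0;
+}
+
+/* Called periodically: anything unacked for over a second is resent. */
+void resend_stale(time_t now, void (*resend)(uint32_t seq))
+{
+    int i;
+    for (i = 0; i &lt; MAX_PENDING; i++)
+        if (window[i].in_use &amp;&amp; now - window[i].sent_at &gt;= 1) {
+            resend(window[i].seq);
+            window[i].sent_at = now;
+        }
+}
+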
+BTW, does anyone know if IPv6 has addressed this issue? I'm aware of QoS
+but not sure to what degree they've taken it. Personally, I think the only
+way we could get guaranteed delivery with low latency is to have each
+router along the way guarantee that a packet is delivered (if, of course,
+its load allows the packet to be accepted in the first place). That way,
+if a packet is dropped by a router due to load (or some other issue), the
+previous router expects a timely response, and when it doesn't get one it
+resends or sends via a different route. (Of course, I would expect a
+per-packet ack, probably a CRC-ack for a certain amount of traffic.) The
+point being that the original sender should never have to resend as long
+as the first router gets all the packets.
+
+-E.J. Wilburn
+<A HREF="mailto:zane@supernova.org">zane@supernova.org</A>
+
+
+
+</pre>
+
+
+
+
+
+<!--endarticle-->
+ <HR>
+ <P><UL>
+ <!--threads-->
+ <LI> Previous message: <A HREF="000485.html">[Nel] TCP vs. UDP</A></li>
+ <LI> Next message: <A HREF="000490.html">[Nel] TCP vs. UDP</A></li>
+ <LI> <B>Messages sorted by:</B>
+ <a href="date.html#487">[ date ]</a>
+ <a href="thread.html#487">[ thread ]</a>
+ <a href="subject.html#487">[ subject ]</a>
+ <a href="author.html#487">[ author ]</a>
+ </LI>
+ </UL>
+</body></html>