
Network Problems

Started by kdeiss, October 11, 2005, 14:28:06

kdeiss

Hi all,

I read in another forum that the MVP has problems in mixed networks (falls back to half duplex) under some circumstances. With the vomp plugin/server we have also sometimes observed network problems (channel not available / cannot play recording etc.).

I played around with that problem a lot and figured out that if one of the partners (the VDR or the MVP) is operating at 10 MBit, half or full duplex, vomp does not operate properly. To everyone: could you reproduce this behaviour?

It's easy to test this by setting eth0 of the VDR to 10 MBit mode with the tool mii-diag. The call is

mii-diag -F <val>

   -F  --fixed-speed <speed>
        Speed is one of: 100baseT4, 100baseTx, 100baseTx-FD, 100baseTx-HD,
                         10baseT, 10baseT-FD, 10baseT-HD


I tested it on two different networks, and on both I could reproduce it. Thanks a lot


Klaus

Chris

Brilliant, I never knew you could do that. You learn something every day!


Right, I know what the problem is.
For a start, the MVP is forced down to half duplex all the time in the kernel because the buffers are too small or something - this was research done before I even bought an MVP. But the problem goes a bit further when it's a 10Mb network. For some reason the packets are much smaller on a 10Mb network and are also arriving at bad times for the MVP network chip, which is basically pathetic. The folks over at MVPMC have worked on this one for a while and I had it shelved in my brain in case I ever had problems like it, so fortunately I can now present the solution... (It's just a shame it took so long to realise that this *is* that problem..)

As I am flying away tomorrow I don't have time to work out how to do this properly, but I will when I get back. If you have set up a dev system for yourself and you can compile the client, do the following:
1. At around line 260 of tcp.cc in the client you will find:

    else
    {
      if (++readTries == 100)
      {
//        Log::getInstance()->log("TCP", Log::ERR, "Too many reads");
//        return 0;
      }
    }

As you can see here I have commented out the two lines within the block. This allows the client to receive in excess of 100 packets per video block call.

2. Instruct TCP on the MVP to only advertise a maximum TCP window size of 4k by doing something like the following:

route del -net 10.0.0.0 netmask 255.255.255.0
route add -net 10.0.0.0 netmask 255.255.255.0 dev eth0 window 4096

Edit that for your actual network number (mine is 10.0.0.0). This needs to be done in a script for obvious reasons, and needs to be done before vompclient runs, so add it to the startup script or something.

I am watching a smooth picture right now, using a 10Mb half duplex server link. (Using the mii-tool thingy!).

If anyone has any suggestions on how to fix this permanently please put them forward. Obviously I will re-enable the read packets limit but just up the number, but the question is what to do about the window size. Perhaps just set it to 4096 for everything?
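
For reference, a minimal sketch of what re-enabling the limit with a higher threshold could look like (the 1000 is purely an illustrative value, not a tested one):

    else
    {
      if (++readTries == 1000) // illustrative value; raised from 100 instead of removing the check
      {
        Log::getInstance()->log("TCP", Log::ERR, "Too many reads");
        return 0;
      }
    }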

kdeiss

Great!

Thanks a lot.
I think it was difficult to detect and to find an environment in which it was definitely not working.
When will you be back? I'm also so impatient to see the new radio function.....

klaus

Chris

In just over a week. I'm fairly sure I know how to fix radio too...

kdeiss

Hi Chris,

I've played around with the network a lot; here are my results. Vomp is now running nearly perfectly with the WLAN adapter, and that was the reason I did all these experiments.

I also checked the mailing list from mvpmc and found this article (route with window 4096), but I personally can't reproduce that behaviour. I wrote a little program in C that determines the current network address, and with that address I automatically set the route with that specific window size (during boot, before the client starts).
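
A rough sketch of how such a boot-time helper could look (assumptions on my part: eth0 as the interface, the netmask taken from the interface itself, and shelling out to the route binary; this is not the actual program kdeiss wrote):

#include <cstdio>
#include <cstring>
#include <cstdlib>
#include <unistd.h>
#include <sys/ioctl.h>
#include <sys/socket.h>
#include <net/if.h>
#include <netinet/in.h>
#include <arpa/inet.h>

int main()
{
  int fd = socket(AF_INET, SOCK_DGRAM, 0);
  if (fd < 0) return 1;

  struct ifreq ifr;
  memset(&ifr, 0, sizeof(ifr));
  strncpy(ifr.ifr_name, "eth0", IFNAMSIZ - 1);

  // Interface address
  if (ioctl(fd, SIOCGIFADDR, &ifr) < 0) { close(fd); return 1; }
  in_addr_t addr = ((struct sockaddr_in*)&ifr.ifr_addr)->sin_addr.s_addr;

  // Interface netmask
  if (ioctl(fd, SIOCGIFNETMASK, &ifr) < 0) { close(fd); return 1; }
  in_addr_t mask = ((struct sockaddr_in*)&ifr.ifr_netmask)->sin_addr.s_addr;
  close(fd);

  // Network address = interface address AND netmask
  struct in_addr net;
  net.s_addr = addr & mask;
  struct in_addr m;
  m.s_addr = mask;

  char netStr[INET_ADDRSTRLEN];
  char maskStr[INET_ADDRSTRLEN];
  strcpy(netStr, inet_ntoa(net));   // inet_ntoa uses a static buffer, so copy each result
  strcpy(maskStr, inet_ntoa(m));

  // Re-add the route for our own network with a 4096-byte TCP window
  char cmd[256];
  snprintf(cmd, sizeof(cmd),
           "route del -net %s netmask %s; "
           "route add -net %s netmask %s dev eth0 window 4096",
           netStr, maskStr, netStr, maskStr);
  return system(cmd);
}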

The result is that the network slows down significantly. With this window size I'm not able to view a live signal or a recording from a DVB-S VDR (stuttering pictures / broken sound). From a DVB-T VDR it is possible, because the bandwidth is smaller. (All in the WLAN segment.)

Curious: if I copy a file (on the MVP) from one NFS volume to another there is nearly no difference (with or without the window size), but streaming seems to be another thing.

So I decided to hook into your code (very, very interesting - perhaps some more comments....) and wrote my own class. Inside it I implemented a text window in which I can see the output of the ifconfig command, so I can monitor on the TV what's going on. Indeed we have some overruns in the WLAN segment. I started the MVP this morning with live TV and now (maybe 6 hours later) I have ca. 1100 overruns (RX packets). But I can access the box from outside and the IP stack does not crash. So I'm not really sure whether we need this route command; I think we need more experience.
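
For anyone wanting to watch the same counter without parsing ifconfig output, a small sketch of how the RX overrun figure could be read straight from /proc/net/dev (which is where ifconfig gets its numbers anyway); drawing it into an on-screen text window is left to whatever OSD classes the client already provides:

#include <cstdio>
#include <cstring>

// Returns the RX overrun counter for the given interface by reading
// /proc/net/dev, or -1 if the interface is not found.
long getRxOverruns(const char* iface)
{
  FILE* f = fopen("/proc/net/dev", "r");
  if (!f) return -1;

  char line[512];
  long overruns = -1;
  while (fgets(line, sizeof(line), f))
  {
    char* colon = strchr(line, ':');
    if (!colon) continue;                 // the two header lines have no ':'
    *colon = '\0';

    // Interface name is the (possibly space-padded) text before the ':'
    char* name = line;
    while (*name == ' ') name++;
    if (strcmp(name, iface) != 0) continue;

    // RX columns in /proc/net/dev: bytes packets errs drop fifo frame ...
    // The "overruns" value printed by ifconfig corresponds to the fifo column.
    unsigned long rxBytes, rxPackets, rxErrs, rxDrop, rxFifo;
    if (sscanf(colon + 1, "%lu %lu %lu %lu %lu",
               &rxBytes, &rxPackets, &rxErrs, &rxDrop, &rxFifo) == 5)
      overruns = (long)rxFifo;
    break;
  }
  fclose(f);
  return overruns;
}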

Another thing is my 10 MBit segment; there are more problems there. After booting the new image with the patched client and no route command, there are more than 1000 (overrun) errors already after 5 minutes of viewing live TV, and the picture is stuttering sometimes - the sound as well. But here too: I can access the box from outside, the IP stack does not crash, and I have something on the TV. I don't know if there is a lot of traffic inside this segment - perhaps the neighbours are heavily accessing the network; I have to investigate this.


Thanks again for your excellent work!

I hope you enjoy your trip......

Klaus