Question / Help - Stream struggles? Help

Xianahru

Member
Why does everyone tell me different stories then?

Paibox told me that the bandwidth used would be able to spike up to 5600...
I didn't say that the buffer would increase max bitrate though.
 

Bensam123

Member
Just because someone has a volunteer tag or offers advice doesn't mean they're well informed on every topic.

VBV Buffer has a lot of conflicting information about what it does online. I actually encountered a lot of different theories as to its use before finding that blog from the program which shall not be named. I would trust the authenticity of what's being stated there over a random member on the forums, as they're actually producing the software as professionals. Of course, they could still be wrong.


That aside, it just looks like you read what Paibox said in a different manner than he meant it. The buffer is essentially a pool: if you end up using more bits for a certain scene than your bitrate allows, it will actually eat into your buffer until it refills again. The rate at which the buffer fills is limited by your bitrate, so in scenes where you don't require the full bandwidth and the buffer is semi-depleted, it'll refill.
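To make the "pool" idea concrete, here's a toy model of how I picture it (made-up numbers and not actual encoder code, just an illustration):

```cpp
// Toy model of the VBV "pool": each frame withdraws its size from the
// buffer, the buffer refills at the max rate, and a frame that would
// overdraw the pool has to be made smaller (lower quality) to fit.
#include <algorithm>
#include <iostream>
#include <vector>

int main() {
    const double maxrate = 3000.0;  // kbit per second
    const double bufsize = 5000.0;  // kbit
    const double fps     = 30.0;
    double buffer        = bufsize; // start with a full pool

    // Made-up frame sizes in kbit, with a complex scene in the middle.
    std::vector<double> wantedFrames = {80, 90, 100, 400, 450, 500, 120, 90, 80};

    for (double want : wantedFrames) {
        buffer = std::min(buffer + maxrate / fps, bufsize); // refill, capped at buffer size
        double got = std::min(want, buffer);                // overdrawing forces a smaller frame
        buffer -= got;
        std::cout << "wanted " << want << " kbit, got " << got
                  << " kbit, pool left " << buffer << " kbit\n";
    }
    return 0;
}
```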

So ideally a bigger buffer is better, but it also causes problems; how or why, I still don't fully understand. For instance, a bigger buffer causes me to spike more bandwidth-wise (even though it shouldn't, and spikes are just part of VBR anyway). It also causes some intermittent latency issues for me, so I ended up using a bitrate of 3100 and a 5000 buffer; higher than that seems to cause issues, but setting the buffer that high in the first place helps with spiky situations, where the program won't then dip into the quality setting and reduce the quality of my stream.

I still don't fully understand how all the different parts of streaming interact with each other, and it's really hard to find reliable information on it. I know what they do individually; they just don't always interact with each other in the same way each time I mess around with them (even though they're supposed to do something in particular). Ideally a very large buffer would be great for streaming, but that doesn't seem to be the case. It also seems that rebroadcasters like Twitch are the ones setting the maximum available buffer (which they don't disclose to streamers); going over this maximum also causes issues. The buffer is also client-side, not on the streamer's end, which is also something to consider.

That's about the gist of it; I'd recommend reading the link for the full explanation, though.
 

Lain

Forum Admin
Forum Moderator
Developer
The only people who are truly qualified to say anything about the codec are the people who understand the codec, understand the codec's code, and are willing to actually put their faces directly into the code of the codec itself (which, let me tell you, is quite unpleasant, because their code is ungodly devilish to read, though extremely optimal). The best people to talk to on the subject are the x264 devs themselves, and the people who originally designed the actual ISO specifications for the codec.

I would warn anyone else that even though I've spent time in the x264 code, what I say now is only based upon my limited experience with it. I will be the first to say I do -not- understand every single inner working of this codec. It's too complex and I don't have the time, so I am not a qualified person to say definitively how the encoder works, and I can guarantee you right now that neither are the guys whose blog you read at that other program which shall not be named (and let's be honest, how many C# developers do you know with a mathematics degree? "They asked if I had a degree in theoretical physics. I told them I had a theoretical degree in physics, and they said welcome aboard"). But I can tell you what I have learned based upon my experience writing this program.

Bensam123 said:
So ideally a bigger buffer is better, but it also causes problems; how or why, I still don't fully understand.

Throw away your preconceived notion of "kilobits per second" for vbv-maxrate. Though you enter a per-second value, it is not the max you send out in every single second of the stream; it's the "average" rate for the whole stream, which is shaped by the buffer size, not a "max allowed in any given second of the stream". Again, vbv-maxrate means "average max per second for the stream itself", not the actual max each second. It is tied to the buffer size, and the higher the buffer size, the more data can be sent out at a given time.

For example, if you set your stream to a 1000 kb/s maxrate and stream for 5 minutes, then regardless of what your buffer size is, if you divide your total bits sent by 300 seconds, you will almost always get very close to 1000 kb/s. However, if you measure each second individually, you'll see that with a higher buffer size it fluctuates higher, because the buffer is allowed to use more data before it has to be sent out; it's allowed to create larger frames, and thus larger fluctuations in transmission, which in the end means poor QoS handling of the packets for both the streamer and the client because the packet sizes keep going wild. I've measured this many times myself.
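If you want to measure it yourself, the bookkeeping is trivial; something along these lines, where RateMeter and its hooks are just made-up names and you feed onPacket() from whatever send path you're instrumenting:

```cpp
// Per-second output vs. whole-stream average. Feed onPacket() from your
// send path and call onSecondElapsed() once a second; the report shows
// how far the worst single second diverges from the stream average.
#include <cstdint>
#include <cstdio>

struct RateMeter {
    uint64_t totalBits = 0, secondBits = 0, maxSecondBits = 0, seconds = 0;

    void onPacket(uint64_t bits) { totalBits += bits; secondBits += bits; }

    void onSecondElapsed() {
        ++seconds;
        if (secondBits > maxSecondBits)
            maxSecondBits = secondBits;
        secondBits = 0;
    }

    void report() const {
        if (!seconds) return;
        std::printf("average: %llu kbit/s, worst single second: %llu kbit/s\n",
                    (unsigned long long)(totalBits / seconds / 1000),
                    (unsigned long long)(maxSecondBits / 1000));
    }
};
```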

If you compile the project and make the preview use CreateBandwidthAnalyzer() for the OBS::network variable instead of CreateNullNetwork(), it shows you both the max for the stream and the max total that was sent out in any given second. The more you increase the buffer size, the larger that "max sent out in a given second" becomes (though you do need to make sure your stream is sufficiently complex).
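The change itself is only a line or two, roughly like this (exactly where the assignment lives depends on the source revision you're building, so treat it as a sketch):

```cpp
// Sketch only: preview mode normally creates a null network object.
//network = CreateNullNetwork();

// Point OBS::network at the bandwidth analyzer instead, so the preview
// reports the max per stream and the max sent out in any given second.
network = CreateBandwidthAnalyzer();
```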

Things like "minimize network impact" can help with the QoS problem caused by large buffer sizes by splitting the packets up and sending them out in properly timed intervals, but that results in more TCP acknowledgements which can increase the chance of frame drops, requires more client-side buffering time, and thus is also somewhat problematic in its own regard. Twitch itself may or may not decide to send the entire frames as a whole instead of splitting them up either, so the client could suffer some serious QoS issues as well.
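Conceptually, what that option does is something like this (a simplified sketch of the pacing idea, not the actual OBS send code; sendChunk stands in for whatever actually writes to the socket):

```cpp
// Pacing sketch: instead of pushing a whole frame onto the socket at once,
// split it into chunks and spread them over the frame interval. Each chunk
// is a separate send, which is where the extra TCP acknowledgements come from.
#include <algorithm>
#include <chrono>
#include <cstddef>
#include <thread>
#include <vector>

void sendPaced(const std::vector<char>& frame, std::size_t chunkSize,
               std::chrono::milliseconds frameInterval,
               void (*sendChunk)(const char*, std::size_t)) {
    std::size_t numChunks = (frame.size() + chunkSize - 1) / chunkSize;
    auto delay = frameInterval / (numChunks ? numChunks : 1);

    for (std::size_t off = 0; off < frame.size(); off += chunkSize) {
        std::size_t len = std::min(chunkSize, frame.size() - off);
        sendChunk(frame.data() + off, len);  // one chunk per send call
        std::this_thread::sleep_for(delay);  // evenly spaced over the interval
    }
}
```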

Bensam123 said:
The buffer is also client-side, not on the streamer's end, which is also something to consider.

This is incorrect. It's both; what is sent is received. The streamer is the one who creates the buffer in the first place; if it were not on the streamer's end, it would not be an option in the streamer's encoder.
 

hilalpro

Member
Jim said:
Things like "minimize network impact" can help with the QoS problem caused by large buffer sizes by splitting the packets up and sending them out in properly timed intervals, but that results in more TCP acknowledgements which can increase the chance of frame drops, requires more client-side buffering time, and thus is also somewhat problematic in its own regard. Twitch itself may or may not decide to send the entire frames as a whole instead of splitting them up either, so the client could suffer some serious QoS issues as well.
I believe you can also make the encoder slice frames to a size matching the MSS. It will make sure that each slice fills a segment and each frame saturates a whole number of packets, for better packet flexibility. It should help especially with the timed-intervals concept, given how it works.
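Through the x264 API that's just one parameter; a sketch (the function name is mine, and 1460 is an assumption for a typical Ethernet MSS, so adjust for your own path MTU):

```cpp
// Cap each slice at roughly one TCP segment so every frame breaks up into
// MSS-sized pieces on the wire.
#include <x264.h>

void configureSliceSize(x264_param_t* param) {
    const int kAssumedMss = 1460;           // bytes; typical Ethernet MSS (assumption)
    param->i_slice_max_size = kAssumedMss;  // max bytes per slice, NAL overhead included
}
```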
 

Bensam123

Member
I've fully read the blog and your post, Jim; they largely mirror each other, the wording is just slightly different. I used 'pool' to denote that it's the theoretical maximum available at a given moment, not the average rate of the player.

Jim said:
Things like "minimize network impact" can help with the QoS problem caused by large buffer sizes by splitting the packets up and sending them out in properly timed intervals, but that results in more TCP acknowledgements which can increase the chance of frame drops, requires more client-side buffering time, and thus is also somewhat problematic in its own regard.

This is an answer I've been looking for for a long time, and it makes complete sense. A good follow-up question: is this a fix for a LAN or for a WAN? Because if it's a fix for a LAN, I probably have some issues in my network I should be cleaning up. Now, onto a definition for automatic low latency mode.

Jim said:
This is incorrect. It's both; what is sent is received. The streamer is the one who creates the buffer in the first place; if it were not on the streamer's end, it would not be an option in the streamer's encoder.

AFAIK what they're talking about in the blog is that while you set the buffer on the streaming end (which is true), it doesn't actually take effect until the client receives the data. It's a client player buffer.

If this were simply a stream buffer on the streamer's computer, why wouldn't you set it to something like 150000? It does make sense to smooth out network transmissions by using a buffer, but I haven't actually seen this happen... There would then be a maximum network rate option available instead of a max bitrate (which refers to the actual encoder setting), or both. VBR is very spiky, and changing the buffer doesn't seem to alleviate this (monitoring the network activity).

I am just going off what I've seen and what I've been able to piece together, but if I had to hypothesize, there seem to be two different ways of looking at this (perhaps you know, because you've been working with the code).

The whole flash streaming protocol seems very basic and almost completely designed around the encoder, instead of around the fact that you're streaming across a WAN.

If it took into account that you were streaming across the internet, there would be an option for the maximum theoretical transfer rate and then a very big buffer to smooth things out; when the data reached the client's computer it would reassemble the stream in normal encoder fashion to replicate the data. But instead we have this:

Max bit rate > Twitch server > buffer in the clients player > video

What it should be is:

Encoder max bit rate > encoder max buffer > network buffer > network max bit rate > Twitch

And then it would be unpacked on the receiving end in reverse order, which is usually the way network protocols work. In the above case you would see very smooth network transfer windows unless the network buffer ran out, in which case everything would then explode. If the two were firmly interwoven you could combine the network buffer and the encoder buffer as well, but I doubt this would happen, as the flash streaming protocol seems like a wrapper for the encoders themselves.

As it stands, though, what really smooths out network transmissions is CBR, which fills the buffer at a constant rate. The only reason filling a buffer at a constant rate would smooth out network transmissions is if the buffer were not on the sending end and really had nothing to do with the network; the buffer is constantly being filled on the client end. It seems almost as if the flash streaming protocol doesn't even take into account that there is a network in between you and the receiving end of the stream, and that's really why we run into so many issues with networking.

Reading snippet:

http://en.wikipedia.org/wiki/Video_buffering_verifier
 

Rough

Member
So I have another question regarding my stream..
So far I'm using:
CBR On
Max Bitrate 2500
Buffer Size 2500
Minimize Network Impact On
Resolution downscale 1.50 > Lanczos
48 Fps
Everything good so far.
I was thinking about that Minimize Network Impact thingy.
So, logically, I started to test a couple of things.
I did a ping test to my local website when I started streaming.
These are my results:

Without Minimize Network Impact:
Ping statistics for 192.118.68.136:
Packets: Sent = 400, Received = 400, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 17ms, Maximum = 152ms, Average = 46ms

With Minimize Network Impact:
Ping statistics for 192.118.68.136:
Packets: Sent = 400, Received = 400, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 17ms, Maximum = 86ms, Average = 35ms

With Minimize Network Impact + Automatic low latency mode:
Ping statistics for 192.118.68.136:
Packets: Sent = 400, Received = 400, Lost = 0 (0% loss),
Approximate round trip times in milli-seconds:
Minimum = 17ms, Maximum = 57ms, Average = 20ms

Look at the results of all 3 tests. Once again, it was a logical way to do it, from my point of view.
The third test got me the best ping results. That means I won't spike that much (?)...
Should I stay with Minimize Network Impact + Automatic low latency mode?
 

Bensam123

Member
That's what I do, Rough, and I haven't found a downside to using both yet. I occasionally disconnect and reconnect, but in the last few days I've streamed, that hasn't happened.

My results roughly mirror yours, only for me automatic low latency mode reduced maximum spikes that were about 10x the average.

While streaming I also keep a ping google.com -t window open on my second monitor so I can monitor my ping, which I find really helpful for diagnostic purposes. Google servers are also very reliable.

Based on what Jim said about minimize network impact, if your computer is putting out more TCP packets, that also means increased CPU usage (which he didn't mention), although most NICs can offload this. I personally haven't had any issues with dropped frames in quite a few broadcasts. Of course, that may vary on a per-user basis.

Nice testing btw, Rough; I never thought about ending my ping test at the end of the broadcast and looking at its stats.
 

Rough

Member
Thanks, but...
From what I've read here, and I've read a lot, everybody tells everybody not to use the automatic low latency mode.
I'm just curious - WHY not?
 

Bensam123

Member
It was never explained why not to use it in that thread, only that it was unnecessary to do so.

"Things like "minimize network impact" can help with the QoS problem caused by large buffer sizes by splitting the packets up and sending them out in properly timed intervals, but that results in more TCP acknowledgements which can increase the chance of frame drops, requires more client-side buffering time, and thus is also somewhat problematic in its own regard. Twitch itself may or may not decide to send the entire frames as a whole instead of splitting them up either, so the client could suffer some serious QoS issues as well."

That would be why not... I haven't experienced any downside yet and I monitor my broadcasts on a separate computer while casting.
 

Bensam123

Member
I was just reiterating what Jim wrote, but I'm guessing he was talking about quality of service in general in regards to how OBS deals with your connection, not actual settings for it found in Windows or on your router.

I don't know about the other two.
 

Bensam123

Member
I don't actually tweak my network stack, as I find that's usually bad for the rest of the OS. Interesting read though, Rough.

I did have a catastrophe in the last week, though; I lost all my data, so I installed W8. It appears that Automatic Low Latency mode now causes OBS to drop a huge number of frames, even though it perfectly stabilizes my ping. Minimize network impact is still needed and used, though. I'm still tweaking W8, as it isn't up to snuff compared to W7.
 

Rough

Member
I'm still using minimize network impact + automatic low latency while streaming Diablo 3 / APB:R / some other games, and zero frames were dropped in OBS.
Could it be that W8 is using some sort of different networking?
Or maybe we're both using different versions of OBS? Here I'm using toast build v0.52.04b.
 

Bensam123

Member
I'm guessing W8 does something funky and Automatic Low Latency mode was tuned for W7. I tried messing around with the tuning factor, but that didn't seem to change much (it only made it worse).
 

Rough

Member
Btw,
Someone linked me to summit1g's stream yesterday: http://www.twitch.tv/summit1g
And in his info, it says that he's using in his XSplit:
1280x720
framerate 30
bitrate 2500
buffer 2500

He was playing CS:GO on some surf map, and his fps was way better than 30.
Well, it just doesn't seem like it's 30 fps with 2500 rate/buffer.
Is there any way to check a stream's fps?
 