  1.    #1  
    I'm running UberKernel and Govnah, and when I updated I saw these two new settings pop up. I don't know what they are, so I just left them alone. Can someone explain what they are, and which options are best?

    And oh, I found an I/O scheduler benchmark, and apparently Anticipatory is the best. Source: http://forums.precentral.net/web-os-...ng-bonnie.html

    Hope someone benchmarks the TCP congestion options too. Thanks in advance!
  2. #2  
    I asked this in the Govnah thread and have had no response yet. Online searches yield pages with a lot of info but no clear answer (Wikipedia, for example). Wikipedia explains the different terms (city names) for TCP congestion control, but obviously none of us will know until rwhitby or the creator of Govnah educates us.
    Last edited by NABRIL15; 08/05/2010 at 08:22 AM.
  3. #3  
    These are advanced controls for the kernels. It's probably best to leave them at default. You can learn more here: Let me google that for you and here: http://tinyurl.com/2cb64lz
    "Patience, use the force, think." Obi-Wan


    Ready to try Preware? Get this first: Preware Homebrew Documentation
  4. #4  
    I did Google it. Most explanations are heavy Linux ones, not necessarily applicable to the Pre.

    So now we need to choose between Reno and the other cities.
  5. #5  
    Quote Originally Posted by NABRIL15 View Post
    Lol, I know, funny names, right? Mine is working fine on the default, which is cubic. So I guess I'll leave it until I learn of a setting that might be better.
    "Patience, use the force, think." Obi-Wan


    Ready to try Preware? Get this first: Preware Homebrew Documentation
  6.    #6  
    I read on the Pixi forums that they found Westwood is the one for them, so I just followed what they had.
  7. #7  
    The Completely Fair Queuing (CFQ) scheduler is the default algorithm in Red Hat Enterprise Linux 4. As the name implies, CFQ maintains a scalable per-process I/O queue and attempts to distribute the available I/O bandwidth equally among all I/O requests. CFQ is well suited for mid-to-large multi-processor systems and for systems which require balanced I/O performance over multiple LUNs and I/O controllers.

    The Deadline elevator uses a deadline algorithm to minimize I/O latency for a given I/O request. The scheduler provides near real-time behavior and uses a round robin policy to attempt to be fair among multiple I/O requests and to avoid process starvation. Using five I/O queues, this scheduler will aggressively re-order requests to improve I/O performance.

    The NOOP scheduler is a simple FIFO queue and uses the minimal amount of CPU/instructions per I/O to accomplish the basic merging and sorting functionality to complete the I/O. It assumes performance of the I/O has been or will be optimized at the block device (memory-disk) or with an intelligent HBA or externally attached controller.

    The Anticipatory elevator introduces a controlled delay before dispatching the I/O to attempt to aggregate and/or re-order requests improving locality and reducing disk seek operations. This algorithm is intended to optimize systems with small or slow disk subsystems. One artifact of using the AS scheduler can be higher I/O latency.
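    If you want to see which of these your device is actually using, the kernel exposes it through sysfs. Here's a minimal sketch in Python, assuming the standard Linux sysfs layout; the mmcblk0 device name is my guess for the Pre's internal flash and may differ on your phone:

        # Inspect (and optionally change) the I/O scheduler via sysfs.
        # The bracketed entry in the file is the scheduler currently in use,
        # e.g. "noop anticipatory deadline [cfq]".
        SCHED_PATH = "/sys/block/mmcblk0/queue/scheduler"   # device name is a guess

        with open(SCHED_PATH) as f:
            print("schedulers:", f.read().strip())

        # Switching (as root) is just writing one of the listed names back:
        # with open(SCHED_PATH, "w") as f:
        #     f.write("noop")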
    If this helped you hit thanks.
  8. #8  
    OK, so you pasted the definitions from a webpage. Now, what do those mean in English for the Pre or Pixi running UberKernel, Thunderchief, or SR71?
  9. #9  
    Quote Originally Posted by kkhanmd View Post
    That is very interesting. Does anyone have experience with which of these seems to perform best on the Pre, either in terms of maximizing system performance or minimizing battery usage? I've been looking for ways to improve battery life without making my Pre run as sluggishly as it does when underclocked. Forum searching and googling haven't done much for me other than remind me that I'm not a Linux guru.
  10. #10  
    Any more word on which settings seem best on the various flavors of Pres and kernels?

    I am on the original Sprint Pre with UberKernel and would be interested in knowing whether any settings besides the defaults are worth it.
  11. #11  
    I'm not so much a scheduler guy, but a long time ago I used to be a protocol stack developer, so I was pleased to see the TCP congestion window options show up in Govnah. I'll take a crack at explaining what all this congestion window stuff is, and I apologize in advance to my fellow engineers for the oversimplification.

    For folks who aren't system programmers, these choices address how the operating system kernel manages flows of information in and out of the kernel, which is at some level the "switchboard operator" of your handset.

    The TCP congestion window relates to network programs using TCP (Transmission Control Protocol), which is the main connection-oriented protocol upon which things are built in the TCP/IP networking world. Things like SMTP for mail, and HTTP for web browsing are all TCP-based. There are connectionless protocols as well, but they usually use something called UDP.

    Because TCP is predicated on the idea of "connections," the systems involved in a communication keep track of the state. For example, there is a "handshake" involved at the beginning of a connection, and the ends will progress through several states until the connection is "established." The same is true when tearing a connection down.

    In the course of sending data between systems, the goal is to send as much data as you can to the other system as quickly as it can consume and acknowledge it, but without hogging all of the bandwidth on the link and while remaining able to handle special urgent things that might happen on the network. It turns out that this is fairly difficult, and becomes far more so in the event that the connection might lose packets frequently.

    Normally, you send a packet, the other side sends an acknowledgement, and you then send your next packet. Think about what happens when for whatever reason you don't receive the acknowledgement. Did the other side get the packet and the acknowledgement get lost? Or did the packet never make it there in the first place? This brings us to the world of retransmission and "windows."

    Latency (the delay in packets getting to where they are going) adds another factor to this whole business. I want to wait enough time for acknowledgements to get to me, because retransmitting unnecessarily will really slow things down, but if I wait too long, I have the same problem, so how long is enough?
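    To make the "how long is enough?" part concrete: most stacks keep a smoothed estimate of the round-trip time and its variance, and derive the retransmission timeout from those. Here's a toy sketch using the textbook constants; nothing Palm-specific, and the numbers are made up:

        # Smoothed-RTT / retransmission-timeout estimator, textbook style.
        ALPHA, BETA = 1 / 8, 1 / 4

        def update_rto(srtt, rttvar, sample_rtt):
            """Fold one measured round-trip time into the running estimates."""
            rttvar = (1 - BETA) * rttvar + BETA * abs(srtt - sample_rtt)
            srtt = (1 - ALPHA) * srtt + ALPHA * sample_rtt
            rto = srtt + 4 * rttvar             # retransmission timeout
            return srtt, rttvar, max(rto, 1.0)  # classically floored at 1 second

        srtt, rttvar = 0.3, 0.15                 # made-up starting estimates (seconds)
        for sample in [0.28, 0.35, 0.90, 0.35]:  # note the latency spike in the middle
            srtt, rttvar, rto = update_rto(srtt, rttvar, sample)
            print(f"sample={sample:.2f}s  srtt={srtt:.2f}s  rto={rto:.2f}s")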

    So somehow we have to juggle sending things as quickly as we can while waiting long enough, but not too long, for acknowledgements, given the lossiness and latency of our link to the other system, and while using our bandwidth efficiently, but not hogging so much of it that we can't receive new connections or do other things with the network at the same time.

    The approach usually taken is a "window." This means that I'll send a set of packets, and wait a finite amount of time for the acknowledgements of those packets to come back before I retransmit them, assuming they were lost on the way to the other end. There are many approaches for choosing the size of a window. Some use a fixed size and wait times, others dynamically vary the window size based on factors such as throughput and latency or "round trip time." The latter are often called "sliding windows."
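    A toy illustration of the fixed-window case, just to make "window" concrete (illustrative numbers only, not how any real stack is written):

        # At most WINDOW packets may be "in flight" (sent but unacknowledged).
        WINDOW, TOTAL = 4, 10
        next_to_send, acked_up_to = 0, 0

        while acked_up_to < TOTAL:
            # Send as long as the window isn't full.
            while next_to_send < TOTAL and next_to_send - acked_up_to < WINDOW:
                print(f"send packet {next_to_send}")
                next_to_send += 1
            # Pretend the oldest outstanding packet gets acknowledged; a real
            # stack would retransmit here if the ack never arrived in time.
            print(f"  ack for packet {acked_up_to}")
            acked_up_to += 1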

    The other big part of TCP congestion control is something called "slow start." This is a feature of many well-designed kernel network protocol stacks that keeps a new connection from blasting so much data out onto the network, before listening to see how well it's getting to its destination, that other services can't use the network. The idea is to send enough data to make sure you have a good link and then open up the hose over time.
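    In numbers, slow start looks roughly like this: the congestion window doubles every round trip until it crosses a threshold, then grows by about one segment per round trip. Again just an illustrative sketch, not any particular kernel's code:

        cwnd, ssthresh = 1, 16   # congestion window and threshold, in segments

        for rtt in range(1, 11):
            phase = "slow start" if cwnd < ssthresh else "congestion avoidance"
            print(f"RTT {rtt:2d}: cwnd={cwnd:3d} segments ({phase})")
            cwnd = cwnd * 2 if cwnd < ssthresh else cwnd + 1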

    The TCP congestion control algorithm balances things like latency, round trip time, packet loss, bandwidth and the like, in an attempt to optimize the throughput of TCP network connections. There's no such thing as a perfect algorithm, though some features like "slow start" are widely considered to be important basic functionality.

    Some algorithms are optimized for high throughput, low delay networks, others for networks with significant packet loss or wide swings in latency. They often get named for their developer, or the version of the BSD Unix operating system kernel in which they first were implemented. That's where some of the city names come from.

    Depending on conditions, your wireless handset's network might look more like one of these than the other. For example, if you are often using your handset in an area with a lot of RF interference, it's possible that it might be experiencing significant packet loss, while in other areas right at the foot of a tower, it might look like a less lossy network.

    For this reason, it's extremely difficult to identify one perfect algorithm. Everyone's environment is different, often many times within a day. Their network usage patterns are also different. Someone who lives on instant messaging might care a lot more about latency than someone who only uses background fetches of mail. In the first case, it might make sense to choose congestion control algorithms that try to minimize latency regardless of throughput, while for someone else the volume of data transmitted might matter more than the delays along the way.

    Radically tweaking your networking kernel options can break things rather badly, so it's best done with caution. You can create situations where your network pretty much stops working or becomes slow, clogged, or flakier than it already is with wireless packet loss. It may not hurt you to experiment, but it's a very good idea to note what your settings are before changing anything.
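    For example, something along these lines will record what you currently have before you touch anything. It assumes the standard Linux /proc paths, which webOS should expose, though I haven't verified the exact layout on the Pre:

        # Record the current and available TCP congestion control algorithms.
        for name in ("tcp_congestion_control", "tcp_available_congestion_control"):
            try:
                with open(f"/proc/sys/net/ipv4/{name}") as f:
                    print(f"{name}: {f.read().strip()}")
            except FileNotFoundError:
                print(f"{name}: not exposed on this kernel")

        # Changing it (as root) is a one-line write, e.g.:
        # open("/proc/sys/net/ipv4/tcp_congestion_control", "w").write("westwood")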

    I know this isn't what many users want to hear but there's no one magic setting that will make your Pre suffer no packet loss, network latency or always have coverage. If you haphazardly change these settings, you may become more frustrated and miserable than you might already be about network performance. On the other hand, if you carefully change one thing at a time, keep track of your changes and performance, and do it in a controlled way, you might find settings that work better for you than the default.

    Just don't expect miracles. At some level, the miracle is that this Internet of ours works as well as it does every day.
  12. #12  
    OK. Good write-up, professor. Wow, lot of words. Thank God for Ctrl+V.

    So, as was asked, has anyone played with the different settings and can suggest which ones to use?
  13. eps1lon3 #13
    @pa28pilot, thanks for the write-up. It helped a great deal in understanding the whole ordeal. I'm still not sure about the different options presented to us in UberKernel, so I googled them. For TCP congestion it lists westwood, cubic, reno, bic, htcp, veno, and illinois. As you said, there is no single one that could suit all of our needs, but I think I got the gist of each one from reading. From what I read, Veno and Westwood are best suited for mobile use. Now all that's left is some real-world testing for me to do.

    Update: From some quick tests streaming a YouTube video, Veno is slower than Westwood.
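    For anyone who wants to put rough numbers on it instead of eyeballing a video, something like this times the same download under each setting. The URL is just a placeholder; pick a largish, consistently hosted file and repeat the run several times, since results bounce around a lot on 3G:

        import time
        import urllib.request

        URL = "http://example.com/testfile.bin"   # placeholder, use your own test file

        start = time.time()
        data = urllib.request.urlopen(URL, timeout=30).read()
        elapsed = time.time() - start
        print(f"{len(data)} bytes in {elapsed:.1f}s -> {len(data) / elapsed / 1024:.1f} KB/s")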
    Last edited by eps1lon3; 08/18/2010 at 03:31 PM.
  14. #14  
    Quote Originally Posted by eps1lon3 View Post
    I wouldn't say it's slower so much as that Westwood gives better overall performance over WiFi and 3G than the others. 3G is a noisy, high-latency link, and Westwood helps smooth that out.

    I switched my kernels to Westwood long ago, and even earlier I used YeAH. The default Palm kernel uses cubic.

    Also, to add: I/O scheduling on solid state is kinda pointless; it really depends on the flash device. There are no moving heads to consolidate and sort I/O for, so technically NOOP is the one to use. But again, the flash is the gating factor, so CFQ or AS will give a slight edge on dodgy flash devices. With some I/O tuning at a lower layer you can get much better throughput with NOOP.
    Live free or DIE!
  15. mdsf #15
    @pa28pilot & unixpsycho

    Excellent explanations!

    Thank you!
