  1.    #1  
    The default I/O scheduler on the Pre is "cfq".

    This is also what Android has been using lately.

    However, some say that solid-state storage media tends to work better with the "noop" or "deadline" scheduler.

    You can try the following to change to the deadline scheduler:

    Code:
    echo deadline > /sys/block/mmcblk0/queue/scheduler
    echo 1 > /sys/block/mmcblk0/queue/iosched/fifo_batch
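    Before switching, you can confirm which schedulers your kernel actually offers; reading the scheduler file lists them all, with the active one in brackets. A minimal sketch (the parsing helper is my own, illustrative only):

```shell
#!/bin/sh
# On the device you would read the live file:
#   cat /sys/block/mmcblk0/queue/scheduler
# which prints something like: noop anticipatory [deadline] cfq

# Illustrative helper: extract the bracketed (active) scheduler name.
active_scheduler() {
    echo "$1" | sed -n 's/.*\[\([a-z]*\)\].*/\1/p'
}

active_scheduler "noop anticipatory [deadline] cfq"
# prints: deadline
```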

    (Obviously, these can be placed in a file that is run at boot-up; for me, I put all my tweaks [overclocking, SmartReflex and this] into /etc/event.d/optware-overclock.) I.e.:

    Code:
    # -*- mode: shell-script; -*-
    description "Overclock to 600MHz + SmartReflex"
    author "Alex Markson"
    version 1.0
    
    start on stopped finish
    stop on runlevel [!2]
    
    console none
    
    script
    # SmartReflex
    # "SmartReflex driver allows for auto-compensation of VDD1 and
    # VDD2 voltages (around the voltages specified by current OPP)
    # by analyzing the silicon characteristics, temperature, voltage etc"
    
    # Enable SmartReflex
    #echo -n 1 > /sys/power/sr_vdd1_autocomp
    #echo -n 1 > /sys/power/sr_vdd2_autocomp
    # seems to work best @550mhz static speed [ for me, personally ]
    
    # according to the OEM shell script in /etc/miniboot.sh,
    # it seems this needs to be set twice to take effect
    
    #echo 550000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_setspeed
    #echo 550000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_setspeed
    
    echo 600000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_setspeed
    echo 600000 > /sys/devices/system/cpu/cpu0/cpufreq/scaling_setspeed
    
    echo deadline > /sys/block/mmcblk0/queue/scheduler
    echo 1 > /sys/block/mmcblk0/queue/iosched/fifo_batch
    
    
    end script
    So, please feel free to try this I/O scheduler mode and post if you notice any differences. Testing is still early for me, and I am not running any 'scientific' A/B benchmark (at the moment) to gather data.

    But my anecdotal experience so far is that in situations where the UI would normally "lock up", it instead does a "lag mix"... if that makes any sense to anyone.
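    For anyone who does want rough numbers, a crude probe (a sketch, not a proper benchmark; dd ships in the Pre's busybox) is to write and re-read a test file once under each scheduler and compare the wall-clock times:

```shell
#!/bin/sh
# Crude throughput probe (illustrative sketch): write a 512 KB file,
# then read it back. Run once per scheduler (after switching via the
# echo above) and compare the times reported by dd / a stopwatch.
probe() {
    f="$1"
    dd if=/dev/zero of="$f" bs=1024 count=512 2>/dev/null
    dd if="$f" of=/dev/null bs=1024 2>/dev/null
}

# Example on-device target: probe /media/internal/ddtest.bin
```

    Note the file here is a stand-in path; point it at the internal flash partition you actually want to exercise.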
  2. #2  
    Man, I hope we make progress on lag and memory issues. Every time I look at top, it's always swapping or damn close, and I get a lot of lag and get the Too Many Cards error when I shouldn't. Oh well, guess I should just stick to dialing numbers and texts. Heh.

    If this looks safe I'll probably try it out.

    Thanks.
  3.    #3  
    There should not be any risk involved in changing the I/O scheduler.

    There is more information about the various schedulers here:
    redhat.com/magazine/008jun05/features/schedulers/

    The Completely Fair Queuing (CFQ) scheduler is the default algorithm in Red Hat Enterprise Linux 4. As the name implies, CFQ maintains a scalable per-process I/O queue and attempts to distribute the available I/O bandwidth equally among all I/O requests. CFQ is well suited for mid-to-large multi-processor systems and for systems which require balanced I/O performance over multiple LUNs and I/O controllers.

    The Deadline elevator uses a deadline algorithm to minimize I/O latency for a given I/O request. The scheduler provides near real-time behavior and uses a round robin policy to attempt to be fair among multiple I/O requests and to avoid process starvation. Using five I/O queues, this scheduler will aggressively re-order requests to improve I/O performance.

    The NOOP scheduler is a simple FIFO queue and uses the minimal amount of CPU/instructions per I/O to accomplish the basic merging and sorting functionality to complete the I/O. It assumes performance of the I/O has been or will be optimized at the block device (memory-disk) or with an intelligent HBA or externally attached controller.

    The Anticipatory elevator introduces a controlled delay before dispatching the I/O to attempt to aggregate and/or re-order requests improving locality and reducing disk seek operations. This algorithm is intended to optimize systems with small or slow disk subsystems. One artifact of using the AS scheduler can be higher I/O latency.
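    Each scheduler also exposes its tunables under the device's iosched directory (for deadline on most kernels: fifo_batch, front_merges, read_expire, write_expire, writes_starved — fifo_batch is what the first post sets to 1). A small sketch to list them; the helper name is mine, the sysfs path is the Pre's:

```shell
#!/bin/sh
# Print "name = value" for every tunable the active scheduler exposes.
# On the Pre the directory is /sys/block/mmcblk0/queue/iosched.
dump_tunables() {
    dir="$1"
    for f in "$dir"/*; do
        [ -f "$f" ] && echo "$(basename "$f") = $(cat "$f")"
    done
}

# dump_tunables /sys/block/mmcblk0/queue/iosched
```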
  4.    #4  
    Hmm, so has no one else tried this yet?
    (bump)

    All you need to do is type
    Code:
    echo deadline > /sys/block/mmcblk0/queue/scheduler
    echo 1 > /sys/block/mmcblk0/queue/iosched/fifo_batch
    It's safe, I promise.
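    If you're still nervous, a slightly more defensive version (my own sketch, not something the thread requires) checks that the kernel actually advertises the scheduler before writing it:

```shell
#!/bin/sh
# Guarded switch (illustrative): only write the scheduler name if it
# appears in the kernel's advertised list, plain or bracketed.
set_scheduler() {
    sched="$1"
    file="${2:-/sys/block/mmcblk0/queue/scheduler}"
    case " $(cat "$file") " in
        *" $sched "*|*"[$sched]"*)
            echo "$sched" > "$file" ;;
        *)
            echo "scheduler '$sched' not available" >&2
            return 1 ;;
    esac
}

# set_scheduler deadline
```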
  5. #5  
    bump
  6. #6  
    You can also tweak this in Govnah now. I've tried the various schedulers, but didn't notice much of a difference.
  7. #7  
    Has anyone tested the different schedulers with higher clock rates? Or will this even matter?

    As the new 1.2 GHz rate doesn't necessarily perform any better than 1 GHz or 800 MHz because of I/O access restrictions, can a different/tweaked scheduler alleviate this bottleneck?
  8. #8  
    Quote Originally Posted by metahacker View Post
    there should not be any risk involved in changing the I/O scheduler.

    there is more information about the various schedulers here
    redhat.com/magazine/008jun05/features/schedulers/
    Thanks for explaining all the settings in Govnah. I wonder, do you know about the TCP settings as well?
