  1. #21  
    Powersave bias is basically a limiter. When the kernel decides the CPU frequency needs to jump up, powersave bias knocks the target frequency back down by the amount you selected (the list expresses it in tenths of a percent, so a value of 100 means 10%).
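    To make that concrete, here's a minimal Python sketch of the idea, assuming the 0-1000 (tenths of a percent) range the tunable uses; the frequency table here is made up, and the real ondemand governor is smarter about it (it splits time between the two nearest table steps):

    Code:
    # Hypothetical frequency table; real tables come from the kernel's cpufreq driver.
    FREQ_TABLE_KHZ = [122880, 245760, 480000, 600000]

    def biased_target(requested_khz, powersave_bias):
        """Reduce the requested frequency by bias/1000, then snap down to a table step."""
        reduced = requested_khz - requested_khz * powersave_bias // 1000
        candidates = [f for f in FREQ_TABLE_KHZ if f <= reduced]
        return candidates[-1] if candidates else FREQ_TABLE_KHZ[0]

    print(biased_target(600000, 100))  # a bias of 100 (10%) pulls a 600 MHz request down to 480 MHz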

    I just wish the webOS Internals guys would finally tell us the difference between ondemand and ondemandtcl, as well as what the latter's max tickle window and floor window settings actually do.
  2. lotuskid
    #22  
    Just switched from a Pre to a Pixi.

    My Govnah settings were ondemandtcl with 122 MHz min and 600 MHz max, compcache enabled at 16MB, and everything else stock.
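    (For anyone curious what Govnah is doing under the hood, it boils down to writing values like those into the kernel's sysfs files. A rough Python sketch, run as root; the cpufreq paths are the standard Linux ones, while compcache sizing goes through the ramzswap module, whose path varies by kernel build, so it's omitted here:)

    Code:
    CPUFREQ = "/sys/devices/system/cpu/cpu0/cpufreq"

    def write(path, value):
        with open(path, "w") as f:
            f.write(str(value))

    write(CPUFREQ + "/scaling_governor", "ondemandtcl")
    write(CPUFREQ + "/scaling_min_freq", 122880)  # kHz, i.e. 122.88 MHz
    write(CPUFREQ + "/scaling_max_freq", 600000)  # kHz, i.e. 600 MHz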

    I've been using those settings for the two days I've had the Pixi and have been more than satisfied with its performance (this coming from a guy who's used to a 1GHz overclocked Pre).

    Admittedly, I don't know what most of the settings mean, so I will try Suzuki's and post back.
  3. errbin
    #23  
    I use Suzuki's settings too; they run smooth and are battery-life friendly,
    but if my charger is near I switch to:

    performance
    min freq 122.8 MHz
    max freq 600 MHz
    compcache 24MB

    It keeps up with my wife's bone-stock Pre on WiFi (I have no WiFi),
    but it drains the battery quick.

    She keeps her Pre stock; she's trading it in for an EVO in a couple of months.
    But I keep itching to add Preware and overclock it. She won't let me.
  4.    #24  
    Thanks guys, you make me feel like I'm a genius lol. I'll admit that in the beginning I was more lost than most of you, but playing with it, doing some research, and getting the cold shoulder from some of the smarter people on this forum when I asked questions pushed me to really learn about it. I have a pretty firm grasp on the stuff now.
  5. lotuskid
    #25  
    After a few hours using your settings, Suzuki, I gotta say I'm pleased! I haven't done any numbers, but along with the faster card animation patch, these settings seem to be working well. The phone is perceptibly zippier, which is just as well if you ask me.
  6.    #26  
    Just to let you guys know, there are new updates in Preware for both UberKernel and Govnah.
  7. errbin
    #27  
    Did the updates change anything?
  8.    #28  
    It added an advanced option in Govnah, and I think the new options in there directly relate to the UberKernel update.
  9. errbin
    #29  
    Urg... going to have to research how to use the advanced settings lol
  10.    #30  
    Same here lol, I'll post back if I find anything substantial. I haven't even played with it yet.
  11.    #31  
    Keeping all old settings and adding:

    I/O Scheduler - Anticipatory
    TCP Congestion - BIC
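
    (Those two knobs live in standard Linux places; here's a quick Python sketch of setting them by hand. The block device name is an assumption, since webOS exposes its flash storage under a different name than "sda":)

    Code:
    def write(path, value):
        with open(path, "w") as f:
            f.write(value)

    write("/sys/block/sda/queue/scheduler", "anticipatory")
    write("/proc/sys/net/ipv4/tcp_congestion_control", "bic")

    # Reading the scheduler file back lists every choice with the active
    # one in brackets, e.g. "noop [anticipatory] deadline cfq".
    print(open("/sys/block/sda/queue/scheduler").read())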
    Last edited by SuzukiGS750EZ; 07/24/2010 at 04:16 PM.
  12.    #32  
    Except... they don't seem to be sticking.
  13. #33  
    Wow, I don't know if my kernel was messed up or something, but I just updated it along with Govnah using Suzuki's settings (thanks!) and it's running amazingly smooth, responsive, and quick... love my Pixi again!
  14. lotuskid
    #34  
    After making mods under advanced settings, my changes manage to stick.

    Maybe creating a new profile for the changes would do the trick.
  15.    #35  
    No, they stick, but when you restart the phone they revert. This wasn't the final release of that feature, so they aren't meant to persist after a reboot.
  16. errbin
    #36  
    What do the I/O Scheduler options (anticipatory, noop, deadline, and cfq) mean?

    What do the TCP Congestion options (bic, cubic, reno, westwood, htcp, and veno) mean?

    I have both set:

    I/O Scheduler - anticipatory
    TCP Congestion - bic


    I would like to know what they mean.


    edit: never mind, I'm googling it
    Last edited by errbin; 07/24/2010 at 10:52 PM.
  17. errbin
    #37  
    Suzuki, is your Pixi overheating while on the Touchstone?

    My Pixi has been heating up to 104-105 degrees while charging since the update;
    before, it would be at 99-100 degrees.
    Last edited by errbin; 07/24/2010 at 11:04 PM.
  18. errbin
    #38  
    Wouldn't TCP Congestion set to westwood be better?


    TCP Westwood (TCPW) is a sender-side-only modification to TCP NewReno that is intended to better handle large bandwidth-delay product paths (large pipes), with potential packet loss due to transmission or other errors (leaky pipes), and with dynamic load (dynamic pipes).

    TCPW relies on mining the ACK stream for information to help it better set the congestion control parameters: Slow Start Threshold (ssthresh) and Congestion Window (cwin). In TCPW, an "Eligible Rate" is estimated and used by the sender to update ssthresh and cwin upon loss indication, or during its "Agile Probing" phase, a proposed modification to the well-known Slow Start phase. In addition, a scheme called Persistent Non Congestion Detection (PNCD) has been devised to detect persistent lack of congestion and induce an Agile Probing phase to expeditiously utilize large dynamic bandwidth.


    The resulting performance gains in efficiency, without undue sacrifice of fairness, friendliness, and stability, have been reported in numerous papers. Significant efficiency gains can be obtained for large leaky dynamic pipes while maintaining fairness. Under a more appropriate criterion for friendliness, i.e. "opportunistic friendliness", TCPW is shown to have good, and controllable, friendliness.


    TCP BIC
    Binary Increase Congestion control is an implementation of TCP with an optimized congestion control algorithm for high speed networks with high latency (called LFN, long fat networks, in RFC 1072). BIC is used by default in Linux kernels 2.6.8 through 2.6.18.
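
    To make the Westwood/Reno contrast concrete, here's a toy Python sketch of the two loss reactions, using the simplified ssthresh = BWE x RTTmin / MSS rule from the Westwood papers; the numbers are illustrative, not kernel code:

    Code:
    MSS = 1460  # bytes per segment

    def reno_ssthresh(cwnd_segments):
        # NewReno: on loss, halve the congestion window
        return max(cwnd_segments // 2, 2)

    def westwood_ssthresh(bwe_bytes_per_s, rtt_min_s):
        # Westwood: size ssthresh from the estimated rate and minimum RTT,
        # so a random (non-congestion) loss doesn't halve throughput
        return max(int(bwe_bytes_per_s * rtt_min_s / MSS), 2)

    cwnd = 100  # segments in flight when a loss is detected
    print(reno_ssthresh(cwnd))                # 50 segments
    print(westwood_ssthresh(1_000_000, 0.1))  # 68 segments on a 1 MB/s, 100 ms path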
  19. #39  
    I use the defaults with compcache switched on at 16MB. I'm okay with my Pixi's speed, just not the "Too many cards" error with only one card open. Compcache seems to have eradicated that error for now.
  20. errbin
    #40  
    Which I/O scheduler should I go with?


    Anticipatory scheduling is an algorithm for scheduling hard disk input/output. It seeks to increase the efficiency of disk utilization by "anticipating" synchronous read operations.

    "Deceptive idleness" is a situation where a process appears to be finished reading from the disk when it is actually processing data in preparation for the next read operation. This causes a normal work-conserving I/O scheduler to switch to servicing I/O from an unrelated process, which is detrimental to the throughput of synchronous reads, as the workload degenerates into a seeking one.[1] Anticipatory scheduling overcomes deceptive idleness by pausing for a short time (a few milliseconds) after a read operation, in anticipation of another nearby read request.[2]

    Anticipatory scheduling yields significant improvements in disk utilization for some workloads.[3] In some situations the Apache web server may achieve up to 71% more throughput from using anticipatory scheduling.[4]

    The Linux anticipatory scheduler may reduce performance on disks using TCQ, high-performance disks, and hardware RAID arrays.[5] The anticipatory scheduler (AS) was the default Linux kernel scheduler between 2.6.0 and 2.6.18, when it was replaced by the CFQ scheduler.
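
    A toy sketch of the "deceptive idleness" fix described above, in Python: after a synchronous read completes, idle briefly hoping the same process issues a nearby follow-up read before seeking off to serve someone else. Purely illustrative; the poll function is a stand-in for the driver's request plumbing:

    Code:
    import time

    ANTIC_WAIT = 0.006  # ~6 ms, in the "few milliseconds" range quoted above

    def next_request(last_reader, pending, poll):
        """Idle up to ANTIC_WAIT for last_reader's next read; else serve the queue."""
        deadline = time.monotonic() + ANTIC_WAIT
        while time.monotonic() < deadline:
            req = poll()  # hypothetical non-blocking poll; returns (pid, sector) or None
            if req:
                if req[0] == last_reader:
                    return req           # the anticipated close-by read arrived
                pending.append(req)      # queue everyone else for later
            time.sleep(0.001)
        return pending.pop(0) if pending else None  # give up; seek elsewhere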

    The NOOP scheduler inserts all incoming I/O requests into a simple, unordered FIFO queue and implements request merging.

    The scheduler assumes I/O performance optimization will be handled at some other layer of the I/O hierarchy: e.g., at the block device, by an intelligent HBA such as a Serial Attached SCSI (SAS) RAID controller, or by an externally attached controller (such as a storage subsystem accessed through a switched Storage Area Network).

    The NOOP scheduler is best used with solid-state devices such as flash memory, or in general with devices that do not depend on mechanical movement to access data (unlike typical hard disk drives, where access time is dominated by seek time plus rotational latency). Such non-mechanical devices do not require the re-ordering of multiple I/O requests, a technique that groups together I/O requests that are physically close together on the disk, reducing average seek time and the variability of I/O service time.
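
    A toy NOOP-style queue matching that description: strict FIFO order, with contiguous requests merged on the way in. Illustrative only:

    Code:
    from collections import deque

    class NoopQueue:
        def __init__(self):
            self.fifo = deque()  # each entry: (start_sector, sector_count)

        def add(self, start, count):
            if self.fifo:
                last_start, last_count = self.fifo[-1]
                if last_start + last_count == start:  # back-merge contiguous I/O
                    self.fifo[-1] = (last_start, last_count + count)
                    return
            self.fifo.append((start, count))

        def dispatch(self):
            return self.fifo.popleft()  # strictly first-in, first-out

    q = NoopQueue()
    q.add(100, 8)
    q.add(108, 8)        # contiguous with the last request: merged into (100, 16)
    q.add(5000, 4)
    print(q.dispatch())  # (100, 16)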


    The goal of the Deadline scheduler is to attempt to guarantee a start service time for a request[1]. It does that by imposing a deadline on all I/O operations to prevent resource starvation. It also maintains two deadline queues, in addition to the sorted queues (both read and write). Deadline queues are basically sorted by their deadline (the expiration time), while the sorted queues are sorted by the sector number.

    Before serving the next request, the Deadline scheduler decides which queue to use. Read queues are given higher priority because processes usually block on read operations. Next, the scheduler checks whether the first request in the deadline queue has expired; if so, it is served immediately. Otherwise, the scheduler serves a batch of requests from the sorted queue. In both cases, it also serves a batch of requests following the chosen request in the sorted queue.

    By default, read requests have an expiration time of 500 ms; write requests expire in 5 seconds.

    The kernel docs suggest this is the preferred scheduler for database systems, especially if you have TCQ aware disks, or any system with high disk performance[2].
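
    Here's a toy Python sketch of that dispatch decision (reads first, expired deadlines win, otherwise elevator order); the expiry values mirror the defaults quoted above, and the real scheduler's batching is more involved:

    Code:
    import time

    READ_EXPIRE, WRITE_EXPIRE = 0.5, 5.0  # seconds, per the quoted defaults

    class DeadlineToy:
        def __init__(self):
            self.requests = []  # (sector, expiry_time, is_read)

        def add(self, sector, is_read):
            expire = READ_EXPIRE if is_read else WRITE_EXPIRE
            self.requests.append((sector, time.monotonic() + expire, is_read))

        def dispatch(self):
            now = time.monotonic()
            reads = [r for r in self.requests if r[2]]
            pool = reads or self.requests                  # reads get priority
            expired = [r for r in pool if r[1] <= now]
            # an expired deadline wins; otherwise serve in sector (elevator) order
            choice = min(expired, key=lambda r: r[1]) if expired else min(pool)
            self.requests.remove(choice)
            return choice[0]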

    CFQ, also known as "Completely Fair Queuing", is an I/O scheduler for the Linux kernel which was written in 2003 by Jens Axboe.

    CFQ works by placing synchronous requests submitted by processes into a number of per-process queues and then allocating timeslices for each of the queues to access the disk. The length of the time slice and the number of requests a queue is allowed to submit depend on the IO priority of the given process. Asynchronous requests for all processes are batched together in fewer queues, one per priority. While CFQ does not do explicit anticipatory IO scheduling, it achieves the same effect of good aggregate throughput for the system as a whole by allowing a process queue to idle at the end of synchronous IO, thereby "anticipating" further close IO from that process. It can be considered a natural extension of granting IO time slices to a process.
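
    And a minimal sketch of the CFQ flavor: per-process queues served round-robin, each getting a slice. Counting requests instead of time keeps the toy short; real CFQ slices by time and IO priority:

    Code:
    from collections import deque

    queues = {  # hypothetical per-process synchronous request queues
        "pid 101": deque([10, 11, 12]),
        "pid 202": deque([900]),
    }
    SLICE = 2   # requests per turn, standing in for a time-based slice

    def round_robin_pass():
        for pid, q in queues.items():
            for _ in range(min(SLICE, len(q))):
                print(pid, "dispatches sector", q.popleft())

    round_robin_pass()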