Header: I did a bit of groundwork for this in the past, but decided to condense the "How to make the most of it" material and separate it from the "How to get it to work" thread. If you have questions, make sure you read through the entirety of this post first.
Disclaimer:
Cryptic / Atari does not provide support for this platform/application, so do not ask them for help setting it up. It is provided on an AS-IS basis.
Document version: 20110411-1127
Color key:
Green colored items are generally either safe commands or preferred options. In the case of diagnostics, if you do not run them as root, you have no chance of damaging your system. And, in most cases of the diagnostics, they are information gatherers only and do not do damage by themselves.
Yellow colored items are generally moderate severity. Only change them if you are reasonably certain you know what you're doing. For preferences (especially in the video post section), yellow indicates options that are less than ideal but may still work fine for you. Always write down the original values before making changes so you can restore them later if you need to do so.
Orange colored items are generally considered only slightly less severe or bad than red. (They rarely appear in this documentation for the simple fact that it's a quick and slippery slope from yellow to red.)
Red colored items are generally dangerous if not used properly. Only change them if nothing else works and if you've confirmed your hardware supports the configuration you're trying. As with the moderate severity items above, be sure to write down pre-change values to restore them in the event you need to do so.
Comments
So, this will cover a few useful and, to a greater extent, lesser known utilities. (If you're an expert Linux user, you already know this so I don't mean you.)
Note, as with all of my instructions, whether or not a package or command is available for your distribution depends entirely on the distribution and/or whether or not you installed it. Research may be necessary on your part to determine what package, if any, is available for your particular distribution to obtain access to the commands.
For example, as root (you may need to run this by prefixing the command with 'sudo' depending on your distribution): # lsof | grep bash
That will list open files belonging to any process whose lsof line contains the word 'bash' — typically files opened by bash processes themselves, plus any other lines that happen to mention 'bash'.
Use in STO's case: Determine whether a program, like wine, has files open even though the program may not be on your screen. Example: wine crashes, but you can't start it back up and don't see any error messages. Example #2: wine seemingly stalls during the STO installation but is in fact running normally (the directx phase of the install is a prime example).
You can also use grep on a file instead of grabbing data passed to it through stdin (often referred to in longer form as 'standard in'): # grep root /etc/passwd
Use in STO's case: Filter through the results of lsof, as listed above, to make it easier to search for open files.
Example usage: # watch "grep MHz /proc/cpuinfo"
Use in STO's case: Use it to watch the files being opened, written to, and otherwise used during the installation process. (Not the only usage mind you, but it's a good one.) Used in combination with the above lsof and grep commands, this is particularly useful.
Example usage (which may or may not require root depending on your distribution): # dmesg
In this example, I am only including a small snippet of dmesg on my system (as it's otherwise quite large):
[ 0.000000] Linux version 2.6.36-ck-r5 (root@unimatrix-01) (gcc version 4.4.5 (Gentoo 4.4.5 p1.2, pie-0.4.5) ) #11 SMP Mon Mar 14 21:04:30 PDT 2011
[ 0.000000] Command line: root=/dev/ram0 init=/linuxrc ramdisk=8192 real_root=/dev/md2 amd_iommu=on iommu=memaper=3 hpet=disable
[ 0.000000] BIOS-provided physical RAM map:
Example usage: # cat /proc/cpuinfo
I'm only including a part of one of my 6 cpus just for viewing so you know what to look for:
processor : 5
vendor_id : AuthenticAMD
cpu family : 16
model : 10
model name : AMD Phenom(tm) II X6 1090T Processor
stepping : 0
cpu MHz : 800.000
cache size : 512 KB
At the top, "processor : 5" means the following: all system CPUs are numbered starting at 0 and counting upward, so 0 is actually the 'first CPU'. In this case, 5 is the 6th CPU in my system.
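If you just want the number of CPUs rather than the full details, counting the 'processor' lines is a quick way to get it (standard Linux /proc layout assumed):

```shell
# Each logical CPU contributes one 'processor : N' line to /proc/cpuinfo,
# so counting those lines gives the CPU count.
grep -c '^processor' /proc/cpuinfo
```

On the 6-core system above, this prints 6.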
However, getting a general sense of the frame values, even if they're absurdly high, is a good indicator of performance. Let glxgears run for a good 40 seconds (8 iterations of the 5-second counts) to get a reliable gauge. Keep in mind there will always be fluctuations, even when you're not doing anything else.
In my case, I have some background raid reconstruction going on while I post this (and a few other graphical applications), so my rates are a bit lower:
# glxgears
58572 frames in 5.0 seconds = 11714.366 FPS
56615 frames in 5.0 seconds = 11322.979 FPS
61397 frames in 5.0 seconds = 12279.175 FPS
59466 frames in 5.0 seconds = 11893.023 FPS
73289 frames in 5.0 seconds = 14657.703 FPS
71782 frames in 5.0 seconds = 14356.343 FPS
74777 frames in 5.0 seconds = 14955.308 FPS
78888 frames in 5.0 seconds = 15777.028 FPS
^C
As I mention in the video section, your CPU speed has an impact on your framerate in OpenGL, more so than in the Windows equivalent (or GDI in wine). glxgears is OpenGL, NOT DirectX, and it is CPU bound and single threaded (which is to say it only runs on one processor).
So, setting my cpus to ondemand instead of performance and re-running the same test:
# glxgears
62997 frames in 5.0 seconds = 12599.400 FPS
63299 frames in 5.0 seconds = 12659.453 FPS
43911 frames in 5.0 seconds = 8782.146 FPS
47759 frames in 5.0 seconds = 9551.406 FPS
51825 frames in 5.0 seconds = 10364.539 FPS
55477 frames in 5.0 seconds = 11095.091 FPS
43727 frames in 5.0 seconds = 8745.145 FPS
53140 frames in 5.0 seconds = 10627.866 FPS
You can see the performance go up and down, but generally it is slower with less CPU power driving the application. (This tradeoff is not as noticeable with DirectX in Windows or GDI in wine.)
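For reference, here is a sketch of how the governor switch itself can be done through sysfs (run as root; the path is the standard cpufreq location, but it only exists if your kernel has cpufreq support, so treat this as a sketch rather than a guaranteed recipe):

```shell
# Switch every CPU's frequency governor (use 'ondemand' or 'performance').
# The SYSFS variable exists only to make dry-running the loop easy; it
# defaults to the real /sys tree.
SYSFS="${SYSFS:-/sys}"
for gov in "$SYSFS"/devices/system/cpu/cpu[0-9]*/cpufreq/scaling_governor; do
    if [ -w "$gov" ]; then
        echo performance > "$gov"
    fi
done
```

You can verify the result with: cat /sys/devices/system/cpu/cpu0/cpufreq/scaling_governor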
$ glxinfo | grep renderer
OpenGL renderer string: GeForce GTX 295/PCI/SSE2
Example usage (as root): # dmidecode
Taking a snippet out from the output:
Handle 0x0002, DMI type 2, 15 bytes
Base Board Information
Manufacturer: ASUSTeK Computer INC.
Product Name: M4N98TD EVO
That is the correct board.
The default configuration shows total CPU usage, disk reads/writes in aggregate, network send and receives in aggregate, swap file paging (in and out), as well as system interrupt and waits. This is particularly useful if you're trying to determine what, if anything, is going on. A particularly good usage of this command is during the STO install (again during the DirectX portion of the install) where you're not sure what's going on because it seemingly stalls but hasn't.
Example: # dstat
You can also sort by any of the columns - using the "?" question mark key for help on the syntax and keystrokes needed to do this.
Note, some distributions allow you to save top settings. Though, don't necessarily count on it. Not all do.
Like top, it can sort based on preference. Though, you don't need to fish through commands to do this. Just click on the column headers and it will sort accordingly. Like top, you can also kill and renice processes through this utility.
The parameters I include here will be for the default generic Linux kernel that most, if not all, should have available.
Provided you want to test any of these options prior to making them permanent in your config, you can do as follows (THESE ARE TEMPORARY AND DO NOT APPLY AFTER REBOOT):
Reboot your system.
Enter grub (lilo does not offer as much here, though sometimes putting append="parameters" after your kernel name works)
Go to the version of the kernel you wish to boot with (using the arrow keys)
Press E to 'edit'
Go to the end of the line and add these parameters
Press Enter to end editing
Press B to 'boot' with the parameters in question
Intel AND AMD systems: iommu=on is usually enabled by default.
If you're sure that you have IOMMU enabled in your BIOS but you're not seeing the value in your dmesg, you'll want to enable it in one of two ways. (The following operates under the assumption that you do have the feature enabled in your BIOS.)
(For example, in my distribution of Gentoo, I was able to confirm it this way):
unimatrix-01 ~ # grep IOMMU /etc/kernels/kernel-config-x86_64-2.6.36-ck-r5
CONFIG_GART_IOMMU=y
# CONFIG_CALGARY_IOMMU is not set
CONFIG_AMD_IOMMU=y
CONFIG_AMD_IOMMU_STATS=y
CONFIG_IOMMU_HELPER=y
CONFIG_IOMMU_API=y
# CONFIG_IOMMU_DEBUG is not set
# CONFIG_IOMMU_STRESS is not set
(Your distribution may have this file located in /boot instead and under another name, sometimes as /boot/config-KERNELVERSION (replace KERNELVERSION with the version on your system). If in doubt, just: "ls /boot/config*". If that also fails, some distributions build the config into the proc filesystem: check whether /proc/config.gz exists. If it does, just do a: "zgrep IOMMU /proc/config.gz".)
The IOMMU lines set to 'y' (CONFIG_GART_IOMMU and CONFIG_AMD_IOMMU in particular) are the relevant examples here.
If you're not sure which values are supported, make sure you have the kernel sources installed for your distribution and check this file (it may vary depending on your distribution): /usr/src/linux/Documentation/kernel-parameters.txt
Another possible value you can use, especially if you know your vendor (like ASUS) supports IOMMU by default but doesn't give you the option to change how much memory is in that allocation:
iommu=memaper=3
Make sure you enter it exactly like that. A value of 3 means a 256MB IOMMU aperture; 2 means 128MB; 1 means 64MB.
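The pattern behind those sizes: each increment of the value doubles the aperture, i.e. the size is 32MB shifted left by the value. A quick sketch of the arithmetic:

```shell
# memaper value N corresponds to an aperture of (32 << N) MB:
# 1 -> 64MB, 2 -> 128MB, 3 -> 256MB.
for n in 1 2 3; do
    echo "iommu=memaper=$n -> $((32 << n))MB"
done
```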
Verify it's right in the dmesg:
[ 0.000000] Your BIOS doesn't leave a aperture memory hole
[ 0.000000] Please enable the IOMMU option in the BIOS setup
[ 0.000000] This costs you 256 MB of RAM
[ 0.000000] Mapping aperture over 262144 KB of RAM @ 20000000
[ 0.818077] PCI-DMA: Disabling AGP.
[ 0.818526] PCI-DMA: aperture base @ 20000000 size 262144 KB
[ 0.818644] PCI-DMA: using GART IOMMU.
[ 0.818740] PCI-DMA: Reserving 256MB of IOMMU area in the AGP aperture
Note: Whatever your video card preferences are, one truth holds at the moment: Nvidia provides the best 'Unix' experience for video card configuration and drivers (and has forums dedicated to Linux/FreeBSD/Solaris questions about their drivers), regardless of your distribution or Unix flavor. As such, my experience and strongest recommendation is: if you're picking a video card for gaming under wine, use Nvidia (GeForce in particular is their flagship gaming line).
For ATI cards: sometimes the distribution (as is the case with most Redhat/Fedora based systems) tells you to obtain the drivers from the website directly.
Other distributions, like Gentoo, will package them for you. (Either edit your /etc/make.conf to include VIDEO_CARDS="fglrx" and/or emerge ati-drivers)
For Nvidia cards: sometimes the distribution (as is the case with most Redhat/Fedora based systems) tells you to obtain the drivers from the website directly.
Other distributions, like Gentoo, will package them for you. (Either edit your /etc/make.conf to include VIDEO_CARDS="nvidia" and/or emerge nvidia-drivers)
The choice is largely up to you and what your hardware supports. I'd say go with the default (GDI) first. If that doesn't work for you, try OpenGL in your wine settings (use winetricks for this). Go with whichever works best for your situation. At the moment, GDI appears to be winning for STO.
Most of these buffers are calculated in 512-byte sectors. For edification: 1024 bytes = 1k and 512 bytes = 0.5k, so you have to double a value given in KiB to get the right sector count. By default, most read-ahead buffers use 128k (or 256 sectors). So, 4M would be 8192 sectors (4096k * 2).
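The KiB-to-sector conversion is just a doubling; as a sketch (the helper name is mine, not from any standard tool):

```shell
# Convert a readahead size given in KiB into 512-byte sectors.
ra_sectors() {
    echo $(( $1 * 2 ))
}
ra_sectors 128    # the common 128k default -> prints 256
ra_sectors 4096   # 4M (4096k)              -> prints 8192
```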
In order to determine what the optimal usage for your system is, I'd recommend the Phoronix Test Suite (google that). There are performance benchmarks you can do that will help you figure out what ideal usage is. For most purposes, I found a 4M readahead buffer (per stripe group) to be ideal (mdadm apparently shares this belief).
Locate the volume you use wine on. I'll give you an example on this test CentOS (redhat derivative) system:
[root@hivemind ~]# mount | grep Vol
/dev/mapper/VolGroup00-LogVol00 on / type ext4 (rw,noatime)
[root@hivemind ~]# lvdisplay /dev/mapper/VolGroup00-LogVol00 | grep Read
Read ahead sectors 256
Note where it says "Read ahead sectors". Remember, this is again in 512 byte increments. So, 256 / 2 = 128k. That's not ideal at all. To change it immediately and make this change persistent on reboot, it's one command:
[root@hivemind ~]# lvchange -r 8192 /dev/VolGroup00/LogVol00
Logical volume "LogVol00" changed
Now, check it again:
[root@hivemind ~]# lvdisplay /dev/mapper/VolGroup00-LogVol00 | grep Read
Read ahead sectors 8192
That's it. You're done.
Most distributions will at least bump this value to 128k if you use the automated partitioning/install tools. Still, quite crappy. Ideal benchmarks suggest one of two values is useful: 256k (for smaller files) or 1M (for larger files, like the game uses). Unfortunately, newer distributions can also place the value right in the dead zone for performance: 512k. So, you have a few options for combating this.
You can A.) reboot to distribution rescue cd, backup your file system, unmount the filesystem, stop the md device, zero the superblocks on the partitions, create a new md device with the right chunk value, install a new file system on the recreated raid partition, restore the filesystem, adjust appropriate config files, and reboot normally (it's a pain!) or B.) go with a hack to work around the problem and deal with it the next time you setup your system.
For most folks, I recommend option B. It isn't intrusive and it is pretty easy to do with a LOT less negative impact. In fact, you can find the value that works for you. If you're not happy, you can just set it back to the original value and no harm done.
To find out what the readahead is for your md raid partition: # blockdev --getra /dev/md2 (Replace /dev/md2 with your raid partition!)
[root@unimatrix-01 ~]# blockdev --getra /dev/md0
256
That's less than ideal. You can change the readahead on the fly without a reboot, but note the change is NOT permanent. If you find a value that works for you, you'll need to put this in a startup script.
[root@unimatrix-01 ~]# blockdev --setra 8192 /dev/md0
[root@unimatrix-01 ~]# blockdev --getra /dev/md0
8192
Voila. 4M readahead.
You can do the same thing for a plain (non-raid) partition:
[root@unimatrix-01 ~]# blockdev --getra /dev/sda3
256
[root@unimatrix-01 ~]# blockdev --setra 8192 /dev/sda3
[root@unimatrix-01 ~]# blockdev --getra /dev/sda3
8192
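Since the --setra change does not survive a reboot, one common approach (assuming your distribution still honors /etc/rc.local or an equivalent local startup script; the exact mechanism varies) is to add the call there:

```shell
# In /etc/rc.local (path and mechanism vary by distribution):
# re-apply the chosen readahead at every boot.
blockdev --setra 8192 /dev/md0    # replace /dev/md0 with your device
```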
Reference URLs: http://en.wikipedia.org/wiki/Con_Kolivas and http://users.on.net/~ckolivas/kernel/
This tends to make the system a bit more responsive under heavier loads and/or video.
---
There is one other option outside of the kernel: schedtool. Sometimes distributions include this and some do not. This particular tool has various methods for scheduling a program. Indeed, the documentation suggests some applications benefit more or less from different levels of priorities. Also note, you should not use this system-wide. Only use it on applications you want to make more interactive, or less. (batch/cron jobs, for example, require a lot less interactivity.)
Interactive is used for gaming, hd movies, etc. For this reason, I recommend using Interactive mode for Wine. Example usage: schedtool -I -e wine Star\ Trek\ Online.exe
In my tests, it improved glxgears scores about 10-15%. And, of course, Star Trek Online felt fairly responsive as well.
--
Filesystems are also a consideration. You might not think so, but reading and writing files can incur penalties depending on the filesystem you use. For the purposes of this documentation, I will color the higher performance ones which are generally stable and available in most distributions.
Note, all filesystems (regardless of OS) are subject to fragmentation. Unix (and by extension the filesystem) does a pretty good job of managing this most of the time. However, it is not immune to the problem - even though it does occur much less frequently than a comparable Windows environment.
--
Another place you can improve your performance in a very small way, but appreciable nonetheless, is through compiler options. I am not going to propose or post any 'aggressive' optimizations. I am going to post 'what works' and tell you the caveats.
All distributions share one common flag for stability: -O2
Do not use -O3 (or higher - while gcc accepts higher numbers, it stops at -O3) for any reason as it contains aggressive and dangerous flags that will break some applications horribly (even if they compile OK) and in unexpected ways.
If you want to compile a package to make certain the instruction sets are optimized for your CPU, there are two flags (and a third if you like) you can use. But before addressing them, I'll explain why they exist and why distributions don't use them in general. Every distribution tries to build architecture-specific packages. For 64 bit systems, the arch is amd64 (even for Intel). So, they compile the binaries to work on the widest possible range of CPUs in that architecture, which is why -mtune=generic is used during compilation: those optimizations work on all CPUs of that architecture.
This can miss some key optimizations both for 32 and 64 bit binaries/libraries. While not earth shattering when done on a single package, when done system-wide, it's a noticeable improvement when you can use optimizations native to your system. However, doing so is extremely tedious from an end-user perspective. And distributions out there do not like to create branchpoints and downloads for every possible cpu combination gcc provides an option to optimize for.
In addition to this, once you optimize your programs for your CPU type, they're generally not compatible with older models/classes of your processor, or with the competitor's. So if you optimize programs for an AMD Phenom II, for example, you cannot use them on an Intel i7 or even an AMD FX processor. You could, however, use them on any of the Barcelona class AMD processors (what gcc calls amdfam10).
So, to compile or optimize for processor instructions that are available on your class CPU: -march=native -mtune=native. If you're compiling for a 32 bit binary, add -m32. For a 64 bit binary -m64. Note, you should generally leave it up to your distribution to determine how to compile for 32bit vs 64bit. (Gentoo users, just put your march and mtune options in the /etc/make.conf, emerge will take care of determining whether to make a binary 32 or 64 bit. Leave the m32 and m64 options out.)
STO (like most games) is currently released as a 32 bit binary and installer. wine has two versions: 64 bit and 32 bit. 64 bit wine will not generally run 32 bit programs cleanly (and its documentation says as much). So, for sanity, troubleshooting, and stability, run 32 bit wine. (wine.i686 on newer Redhat derivatives; in Gentoo, just use the -win64 USE flag to keep it from building 64 bit wine.)
--
One frequently asked question, particularly with regard to 3D games, is "why does the game crash after a few hours of playing?" The answer has several parts, and your video settings play into it: max settings will speed the process up and make crashes occur more frequently. Here is the answer in parts:
First, the STO binary itself is not large address aware. Which is to say that even though you have a 64 bit OS, the game runs in a 32 bit address space and is restricted to 4G of RAM (more accurately, about 3.2 - 3.5G). Even if you have 16G of system memory, the game will never use more than 3.5-4G, and with your settings turned up it will run into that invisible wall sooner or later. Note, STO is not the only game that suffers from this. Believe it or not, WoW (and a lot of other 3D-heavy MMOs) does as well.
The solution is for STO (and similar games) to eventually become large address aware, with the caveat that earlier OSes (like WinXP) would probably no longer be able to run STO. For the purposes of this documentation, I will not include methods which alter the binary, nor material that would violate the EULA (don't ask in game or via mail either; I will not respond, except maybe to laugh).
Second, which plays into the first reason, the game continues to use more memory over the course of time. This happens every time you change zones and/or encounter more people/objects (NPCs/etc) in a zone. The higher your graphics settings and the more objects around you to buffer, the more memory the game uses.
Windows 7 gets around this by managing its memory differently, a hack to keep 32 bit binaries compatible which doesn't honor the way 32 bit and 64 bit binaries should behave in general. It's not so much a problem wine is creating as wine honoring the 'right'[tm] way 32 bit vs 64 bit should work.
My personal preference is go with "Start with the default" options and work from there. Game developers are pretty smart in their choice of "recommended" settings. They assume middle of the road system setups. If you know your hardware and setup are better than average, then you can tweak upward from there. Conversely, if you know your setup is less than ideal, you can pare your settings down from default.
As such, my recommendation is to go with the recommended default settings for video. With a few additional rules.
If you prefer a barebones approach, this may help: STO Performance and Frame Rate Guide v1.0. The recommended settings in this link will pare the game down to the lowest possible values and make it otherwise playable for you.
If you prefer a maximum settings approach, I still recommend my previous list above - with the other settings at maximum. The issue is with shadowed lights and frame stabilization. Middle of the road or maximum settings share those 3 options. The rest is flexible.
As mentioned in the previous post, a 4M readahead buffer (per stripe group) works best in general. The caveat in parentheses may confuse some, so I'll explain it a little better here. Note, however, that the actual value may depend on the stripe size of your raid. But I will start off by explaining something else to the LVM and flat partition users.
The exception to this rule is those folks using on-board raid chips on their motherboard. You're using a software raid at that point (sorry, but it's sadly true). It then causes the kernel and installation method to choose a 'dmraid' (device mapper) which then creates a software version of raid - just as if you had done it manually - for you. So, if your on-board raid chip is set to do any level of raid, you want to go to the mdadm section and read further.
LVM and flat partition users, your life is somewhat easier. You don't have to worry about striping or chunk sizes. In your case, you'll just use a flat value of 8192 (or 4M) for your readahead. Noting, of course, that if you find a value that works better for your configuration, you can skip the rest of this post as it does not apply to you.
For mdadm raid users, the right value depends on three things:
- Raid level
- Chunk size
- Number of disks in the array you're looking at.
The value of your readahead is going to change depending on your answers. As such, the following section describes useful values for you.
If you chose 256k as your chunk size:
(256k * 4) = 1M
1M = 1024k
1024k * 2 = 2048 readahead
If you chose 512k as your chunk size:
(512k * 4) = 2M
2M = 2048k
2048k * 2 = 4096 readahead
Typical onboard raid controllers have these as their values, however:
64k value
(64k * 4) = 256k
256k * 2 = 512 readahead
or 128k value
(128k * 4) = 512k
512k * 2 = 1024 readahead
1024k chunk and 4 disks:
1024k * 4 {mirrors} * 2 {stripes} = 8M
8M = 8192k
8192k * 2 = 16384 readahead
512k chunk and 4 disks:
512k * 4 {mirrors} * 2 {stripes} = 4M
4M = 4096k
4096k * 2 = 8192 readahead
256k chunk and 4 disks:
256k * 4 {mirrors} * 2 {stripes} = 2M
2M = 2048k
2048k * 2 = 4096 readahead
128k chunk and 4 disks:
128k * 4 {mirrors} * 2 {stripes} = 1M
1M = 1024k
1024k * 2 = 2048 readahead
64k chunk and 4 disks:
64k * 4 {mirrors} * 2 {stripes} = 512k
512k * 2 = 1024 readahead
1024k chunk and 4 disks:
1024k * 2 {mirrors} * 4 {stripes} = 8M
8M = 8192k
8192k * 2 = 16384 readahead
512k chunk and 4 disks:
512k * 2 {mirrors} * 4 {stripes} = 4M
4M = 4096k
4096k * 2 = 8192 readahead
256k chunk and 4 disks:
256k * 2 {mirrors} * 4 {stripes} = 2M
2M = 2048k
2048k * 2 = 4096 readahead
128k chunk and 4 disks:
128k * 2 {mirrors} * 4 {stripes} = 1M
1M = 1024k
1024k * 2 = 2048 readahead
64k chunk and 4 disks:
64k * 2 {mirrors} * 4 {stripes} = 512k
512k * 2 = 1024 readahead
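All of the tables above reduce to one formula: readahead sectors = chunk size in KiB, times the combined {mirrors} * {stripes} multiplier, times 2 (for the 512-byte sectors). As a sketch (the function name is mine, not from any tool):

```shell
# readahead_sectors CHUNK_KIB MULTIPLIER
# MULTIPLIER is the product of the {mirrors} and {stripes} factors above
# (8 for the 4-disk raid 10 layouts, 4 for a plain 4-disk stripe).
readahead_sectors() {
    echo $(( $1 * $2 * 2 ))
}
readahead_sectors 1024 8   # 1024k chunk -> prints 16384
readahead_sectors 512 8    # 512k chunk  -> prints 8192
readahead_sectors 64 8     # 64k chunk   -> prints 1024
```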
Raid considerations - As you can see in the previous post, different raid configurations require different settings. So, how do you choose what's best for you?
For my system, I have 4 1Terabyte Seagate 7200 RPM Sata drives (with 32M onboard cache). I didn't like the idea of using the onboard controller (and really still don't) because it doesn't let me choose configurations which might be a bit wiser for my usage. I'll explain as such.
Linux distributions don't like for /boot to be on any partition it can't read immediately and without a minimal amount of overhead and kernel options. So, how do I accomplish redundancy and still manage to give the kernel what it wants?
Simple. I partitioned each of the 4 drives accordingly:
partition 1 {boot} is a 256M partition on all 4 drives.
partition 2 {swap} is a 2G partition on all 4 drives. (I'll get to why in a second)
partition 3 {root} is a 50G partition on all 4 drives.
partition 4 {extended partition - due to how dos partitioning schemes work}
--partition 5 {/home} is a roughly 0.9T partition (technically, this is the 4th partition, but it shows as 5.)
I then configured each partition as a raid autodetect (type: fd in fdisk).
For partition 1, I wanted it to meet the Linux kernel requirements but still provide redundancy in the event of drive failure. So, I made partition 1 on all drives a raid 1 (mirroring) array. This means if I make changes to one drive, all of the others see them. Indeed, you can then install your boot loader on each of the disks and still boot your system with no additional configuration afterward.
Because the chunk size doesn't matter on the boot partition (it's fairly small anyway), I used:
# mdadm -C /dev/md0 -l 1 -n 4 --bitmap=internal -e 0.90 /dev/sd[a-d]1
Voila. If you're using Redhat derivatives (Fedora, CentOS, Scientific Linux, Mandriva, Mandrake), you can do this in the GUI by specifying your own custom layout during initial installation of the OS, creating each of the partitions as 'software raid' but with similar layouts to the above.
Note: If you're using Redhat derivatives, make certain that you use ext4 (for newer releases) or ext3 (for older releases) on your boot partition as the filesystem. Since I don't use a redhat derivative, I chose xfs for mine. See the above post regarding filesystems and the "optimal" ones.
For partition 2, this is the swap partition. I know a few people are scratching their heads going "why 2G"? That's easy: it's not going to be 2G in the end. With a swap partition, you don't want all of your I/O on one disk if you can avoid it, especially with a raid. And since this data is not precious (in this context, that means it doesn't matter if it becomes corrupt; you can always reformat it with zero loss to you), raid 0 was the perfect choice for this configuration.
# mdadm -C /dev/md1 -l 0 -n 4 -e 0.90 /dev/sd[a-d]2
Now, when you go to use the swap space, it will spread the I/O across the disks rather than a single one. You'll thank yourself later for doing that. In addition, that 2G became 8G because it's 2+2+2+2 in raid 0. So, instead of 2G of swap space, I actually have 8G. (I also think the system auto-defaulted to a 512k chunk size for this particular array.)
For partition 3, this is the root partition. This is where most of your OS (but not personal files) will sit. I decided that raid 10 was ideal. However, I didn't necessarily need massive chunks as I would with the home partition later. I also wanted it to be setup in 'near' mode (as mentioned in my previous advanced raid configuration post) So, I set it up like this:
# mdadm -C /dev/md2 -l 10 -n 4 -p n2 --bitmap=internal --chunk=256k -e 0.90 /dev/sd[a-d]3
256k is the sweet spot if you're using smaller files. On my system, that proved to be smart since I do a lot of compiling in my Linux distribution; smaller chunks are better for that partition due to the large number of small files I access frequently.
That 50G on the 4 drives in raid 10 config became 100G for the root partition.
Note: If you're using Redhat derivatives, make certain that you use ext4 (for newer releases) or ext3 (for older releases) on your root partition as the filesystem. Since I don't use a redhat derivative, I chose xfs for mine. See the above post regarding filesystems and the "optimal" ones.
For partition 5, this is the home partition. This is where most of your personal files will sit. I decided that raid 10 was also ideal here. However, unlike the root partition, I would have significantly larger files here. The .hogg files in this game are particularly huge and I use them frequently. Unlike the root partition, I wanted this one in 'far' mode for raid 10.
# mdadm -C /dev/md3 -l 10 -n 4 -p f2 --bitmap=internal --chunk=1024k -e 0.90 /dev/sd[a-d]5
Note, for all of these, I also used "--assume-clean" so that mdadm wouldn't attempt to rebuild the array right after I created it. Whether or not you use it is up to you. The '--bitmap' flag is important too (except in the case of md1, the swap partition, where you don't care whether the data survives). What it says is: "if you need to rebuild, use the stopping point kept in the metadata," so when you stop and restart the raid array mid-rebuild, the rebuild resumes from where it left off instead of starting over from the beginning.
That 0.9T partition became 1.8T for the home partition in raid 10.
Note: You may use whatever filesystem is supported by your distribution for this. Either ext4 or xfs is an ideal candidate. Naturally, I chose xfs (given comparable performance between it and ext4, what cast the deciding vote was xfs's established defragmentation tool, xfs_fsr).
As you can see from the configurations I made, I chose different optimizations and layouts depending on use. This lets me run each partition in its optimal configuration without sacrificing performance for the purpose it serves.
* 20110411-1127: Added Software caveats / version page.
Note, you do have a second option here. If you find that your distribution carries patches which hurt wine's stability, you can read the documentation and try to compile the latest version from git source. I'd recommend this only as a last resort. As an example, Gentoo's mousewarp patch caused weird behavior for me with wine 1.3.15 and 1.3.16, breaking my scroll wheel in STO even though it worked fine outside of wine and prior to those versions. So, I went with the git version, which did not contain the patch that broke my functionality.