Header: I originally wrote this guide about 3.25 years ago. It has since gone into the archive (which is fine) - and this thread updates a few details which have changed since then.
Disclaimer:
Cryptic / Perfect World does not provide support for this platform/application, so do not ask them to help you with setting it up. It's on an AS-IS basis.
Document version: 20140701-2000
Color key:
Green colored items are generally either safe commands or preferred options. In the case of diagnostics, if you do not run them as root, you have no chance of damaging your system. And, in most cases of the diagnostics, they are information gatherers only and do not do damage by themselves.
Yellow colored items are generally moderate severity. Only change them if you are reasonably certain you know what you're doing. On preferences (especially in the video post section), yellow colored items indicate settings that are less than ideal but may work for you just fine. Always write down the original values before you make the changes so you can restore them later if you need to do so.
Orange colored items are generally considered only slightly less severe or bad than red. (They rarely appear in this documentation for the simple fact that it's a quick and slippery slope from yellow to red.)
Red colored items are generally dangerous if not used properly. Only change them if nothing else works and if you've confirmed your hardware supports the configuration you're trying. As with the moderate severity items above, be sure to write down pre-change values to restore them in the event you need to do so.
Comments
The parameters I include here will be for the default generic Linux kernel that most, if not all, should have available.
If you want to test any of these options prior to making them permanent in your config, you can do as follows (THESE ARE TEMPORARY AND DO NOT PERSIST AFTER A REBOOT):
Reboot your system.
Enter the grub menu (lilo does not offer as much here, though putting append="parameters" after your kernel stanza in lilo.conf sometimes works)
Go to the version of the kernel you wish to boot with (using the arrow keys)
Press E to 'edit'
Go to the end of the kernel line and add the parameters you want to test (see the example after this list)
Press Enter to end editing
Press B to 'boot' with the parameters in question
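For example, the edited kernel line might end up looking like this (a hypothetical grub legacy line - your kernel version, root device, and existing options will differ):
kernel /boot/vmlinuz-3.14.4 root=/dev/md2 ro amd_iommu=on iommu=memaper=3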
Intel AND AMD systems: iommu=on is usually enabled by default.
If you're sure that you have IOMMU enabled in your BIOS but you're not seeing the value in your dmesg, you'll want to enable it in one of two ways: confirm your kernel was built with IOMMU support, or pass the appropriate kernel parameter at boot. Everything below operates under the assumption you have this feature enabled in your BIOS.
(For example, in my distribution of Gentoo, I was able to confirm it this way):
mars ~ # grep IOMMU /etc/kernels/kernel-config-x86_64-3.14.4-ck
CONFIG_GART_IOMMU=y
# CONFIG_CALGARY_IOMMU is not set
CONFIG_IOMMU_HELPER=y
CONFIG_IOMMU_API=y
CONFIG_IOMMU_SUPPORT=y
CONFIG_AMD_IOMMU=y
CONFIG_AMD_IOMMU_STATS=y
CONFIG_AMD_IOMMU_V2=y
CONFIG_INTEL_IOMMU=y
# CONFIG_INTEL_IOMMU_DEFAULT_ON is not set
CONFIG_INTEL_IOMMU_FLOPPY_WA=y
# CONFIG_IOMMU_DEBUG is not set
# CONFIG_IOMMU_STRESS is not set
(Your distribution may have this file located in /boot instead, under another name - sometimes as /boot/config-KERNELVERSION (replace KERNELVERSION with the version on your system). If in doubt, just run: "ls /boot/config*". If that also fails, some distributions have it built into the proc system. Check to see if /proc/config.gz exists. If it does, just do a: "zgrep IOMMU /proc/config.gz")
The relevant lines here are the CONFIG_AMD_IOMMU* and CONFIG_INTEL_IOMMU* ones (for AMD and Intel platforms, respectively).
If you're not sure which values are supported, make sure you have the kernel sources installed for your distribution and check this file (it may vary depending on your distribution): /usr/src/linux/Documentation/kernel-parameters.txt
Another possible value you can use, especially if you know your vendor (ASUS, for example) supports IOMMU by default but doesn't give you a BIOS option to change how much memory is in that allocation:
iommu=memaper=3
Make sure you enter it like that. A value of 3 means a 256MB IOMMU aperture, 2 means 128MB, and 1 means 64MB.
Verify it's right in the dmesg:
[ 0.000000] Your BIOS doesn't leave a aperture memory hole
[ 0.000000] Please enable the IOMMU option in the BIOS setup
[ 0.000000] This costs you 256 MB of RAM
[ 0.000000] Mapping aperture over 262144 KB of RAM @ 20000000
[ 0.818077] PCI-DMA: Disabling AGP.
[ 0.818526] PCI-DMA: aperture base @ 20000000 size 262144 KB
[ 0.818644] PCI-DMA: using GART IOMMU.
[ 0.818740] PCI-DMA: Reserving 256MB of IOMMU area in the AGP aperture
Note: Whatever your preferences for video cards are, there is one truth at the moment: Nvidia has the best 'Unix' experience for video card configuration and drivers (and has forums dedicated to Linux/FreeBSD/Solaris questions with respect to their drivers), regardless of your distribution or Unix flavor. As such, it is my experience and greatest recommendation that if you use a video card for gaming under Wine, use Nvidia (GeForce in particular is their flagship line for gaming).
For AMD/ATI cards: Sometimes the distribution (as is the case with most Redhat/Fedora based systems) tells you to obtain the drivers from the website directly. Other distributions, like Gentoo, will package them for you (either edit your /etc/make.conf to include VIDEO_CARDS="fglrx" and/or emerge ati-drivers).
For Nvidia cards: Sometimes the distribution (as is the case with most Redhat/Fedora based systems) tells you to obtain the drivers from the website directly. Other distributions, like Gentoo, will package them for you (either edit your /etc/make.conf to include VIDEO_CARDS="nvidia" and/or emerge nvidia-drivers).
The choice is largely up to you and what your hardware supports. I'd say go with the default (opengl) first. If that doesn't work for you, try gdi in your Wine settings (winetricks can set this). Go with whichever works best for your situation. At the moment, opengl appears to be winning for STO.
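For reference, a quick sketch of flipping that setting with winetricks (assuming your winetricks version includes the ddr settings - it simply toggles the DirectDrawRenderer registry key in your wineprefix):
$ winetricks ddr=gdi
$ winetricks ddr=opengl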
Most of these buffers use the byte system for calculations in 512-byte increments (sectors). For edification, 1024 bytes = 1k and 512 bytes = 0.5k. You may have to double your values in order to achieve the right amount. By default, most read-ahead buffers use 128k (or 256 sectors). So, 4M would be 8192 (4096 * 2).
In order to determine what the optimal usage for your system is, I'd recommend the Phoronix Test Suite (google that). There are performance benchmarks you can do that will help you figure out what ideal usage is. For most purposes, I found a 4M readahead buffer (per stripe group) to be ideal (mdadm apparently shares this belief).
Locate the volume you use wine on. On this test CentOS (redhat derivative) system, I will give you an example:
[root@hivemind ~]# mount | grep Vol
/dev/mapper/VolGroup00-LogVol00 on / type ext4 (rw,noatime)
[root@hivemind ~]# lvdisplay /dev/mapper/VolGroup00-LogVol00 | grep Read
Read ahead sectors 256
Note where it says "Read ahead sectors". Remember, this is again in 512 byte increments. So, 256 / 2 = 128k. That's not ideal at all. To change it immediately and make this change persistent on reboot, it's one command:
[root@hivemind ~]# lvchange -r 8192 /dev/VolGroup00/LogVol00
Logical volume "LogVol00" changed
Now, check it again:
[root@hivemind ~]# lvdisplay /dev/mapper/VolGroup00-LogVol00 | grep Read
Read ahead sectors 8192
That's it. You're done.
Most distributions will at least bump this value, if you use the automated partitioning/install tools, to 128k. Still, quite crappy. Ideal benchmarks suggest one of two possible values are useful: 256k (for smaller files) or 1M (for larger files - like the game uses). Unfortunately, newer distributions can also place the value right in the dead zone for performance: 512k. So, you have a few options on combating this.
You can A.) reboot to distribution rescue cd, backup your file system, unmount the filesystem, stop the md device, zero the superblocks on the partitions, create a new md device with the right chunk value, install a new file system on the recreated raid partition, restore the filesystem, adjust appropriate config files, and reboot normally (it's a pain!) or B.) go with a hack to work around the problem and deal with it the next time you setup your system.
For most folks, I recommend option B. It isn't intrusive and it is pretty easy to do with a LOT less negative impact. In fact, you can find the value that works for you. If you're not happy, you can just set it back to the original value and no harm done.
To find out what the readahead is for your md raid partition: # blockdev --getra /dev/md2 (Replace /dev/md2 with your raid partition!)
[root@unimatrix-01 ~]# blockdev --getra /dev/md0
256
That's less than ideal. You can change the readahead on the fly without a reboot, but note the change is NOT permanent. If you find a value that works for you, you'll need to put this in a startup script.
[root@unimatrix-01 ~]# blockdev --setra 8192 /dev/md0
[root@unimatrix-01 ~]# blockdev --getra /dev/md0
8192
Voila. 4M readahead.
The same applies to a flat (non-raid) partition:
[root@unimatrix-01 ~]# blockdev --getra /dev/sda3
256
[root@unimatrix-01 ~]# blockdev --setra 8192 /dev/sda3
[root@unimatrix-01 ~]# blockdev --getra /dev/sda3
8192
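As noted above, blockdev changes do not survive a reboot. A minimal persistence sketch, assuming your distribution still runs /etc/rc.local (or an equivalent local startup script) at boot - the device names are examples, use your own:
#!/bin/sh
# /etc/rc.local - reapply readahead values at boot
blockdev --setra 8192 /dev/md0
blockdev --setra 8192 /dev/sda3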
Reference URLs: http://en.wikipedia.org/wiki/Con_Kolivas and http://users.on.net/~ckolivas/kernel/
Running a kernel with the -ck patchset tends to make the system a bit more responsive under heavier loads and/or during video playback.
---
There is one other option outside of the kernel: schedtool. Sometimes distributions include this and some do not. This particular tool has various methods for scheduling a program. Indeed, the documentation suggests some applications benefit more or less from different levels of priorities. Also note, you should not use this system-wide. Only use it on applications you want to make more interactive, or less. (batch/cron jobs, for example, require a lot less interactivity.)
Interactive is used for gaming, hd movies, etc. For this reason, I recommend using Interactive mode for Wine. Example usage: schedtool -I -e wine Star\ Trek\ Online.exe
In my tests, it improved glxgears scores about 10-15%. And, of course, Star Trek Online felt fairly responsive as well.
--
Filesystems are also a consideration. You might not think so, but reading and writing files can incur penalties depending on the filesystem you use. For the purposes of this documentation, I will color the higher performance ones which are generally stable and available in most distributions.
Note #1: All filesystems (regardless of OS) are subject to fragmentation. Unix (and by extension the filesystem) does a pretty good job of managing this most of the time. However, it is not immune to the problem - even though it does occur much less frequently and certainly less noticeably so than a comparable Windows environment.
Note #2: On systems with SSD drives, you will not want to defragment, even with xfs. This is not a limitation of the filesystem (it will do that just fine). More to the point, SSD drives generally do NOT need to be defragmented: it does not improve their performance and only shortens the life of those drives. Magnetic/standard rotational disks, however, will see measurable and perceivable benefit from defragmentation, as the technology is significantly different.
--
Another place you can improve your performance in a very small way, but appreciable nonetheless, is through compiler options. I am not going to propose or post any 'aggressive' optimizations. I am going to post 'what works' and tell you the caveats.
All distributions share one common flag for stability: -O2
Do not use -O3 (or higher - while gcc accepts higher numbers, it stops at -O3) for any reason as it contains aggressive and dangerous flags that will break some applications horribly (even if they compile OK) and in unexpected ways.
If you want to compile a package to make certain the instruction sets are optimized for your CPU, there are two flags (and 3 if you like) you can use. But before addressing them, I'll explain why they're present and why they are not used in distributions in general. Every distribution tries to make an architecture specific package. For the sake of 64 bit systems, I'll say amd64 is the arch (and it is for 64 bit systems - even Intel). So, they compile the binaries to work on the widest possible range of CPUs, which is why -mtune=generic is used during compilation. These are optimizations that work on all platforms in that architecture.
This can miss some key optimizations both for 32 and 64 bit binaries/libraries. While not earth shattering when done on a single package, when done system-wide, it's a noticeable improvement when you can use optimizations native to your system. However, doing so is extremely tedious from an end-user perspective. And distributions out there do not like to create branchpoints and downloads for every possible cpu combination gcc provides an option to optimize for.
In addition to this, once you optimize your programs for your CPU type, they're generally not compatible on older models/classes of your processor or that of the competitor. So, if you optimize programs for an AMD Phenom II, for example, you cannot use them on an Intel i7 or even an AMD FX processor. You could, however, use it on any of the Barcelona class AMD processors (or what they consider amdfam10 in gcc).
So, to compile or optimize for processor instructions that are available on your class CPU: -march=native -mtune=native. If you're compiling for a 32 bit binary, add -m32. For a 64 bit binary -m64. Note, you should generally leave it up to your distribution to determine how to compile for 32bit vs 64bit. (Gentoo users, just put your march and mtune options in the /etc/make.conf, emerge will take care of determining whether to make a binary 32 or 64 bit. Leave the m32 and m64 options out.)
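For Gentoo users, a minimal /etc/make.conf sketch of the above (assuming a native build; -pipe is an optional, commonly used addition that only affects compile speed, not the generated code):
CFLAGS="-O2 -march=native -mtune=native -pipe"
CXXFLAGS="${CFLAGS}"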
--
There is another, less considered, option for SSD drives: changing your I/O scheduler. Note, this only applies if you have SSD drives. Doing this on standard rotational/magnetic drives will make your performance worse.
You can change your I/O scheduler to either deadline or noop. Right now, there's something of a discussion going on as to which is better. Some environments suggest one is better than the other. Either way, BOTH are significantly better than CFQ (the current default scheduler in most distributions). I currently use BFQ (as I use the -ck kernel). However, this should be relatively the same performance-wise as those two options. If you're not sure what you're using, do this to find out:
(For /dev/sda, for example):
mars ~# cat /sys/block/sda/queue/scheduler
noop deadline [bfq]
That says that bfq is my default scheduler. As I only use SSD drives in my system, it didn't make sense to compile anything else in. Your kernel and distribution may not give you the option. However, if you do see deadline or noop in that scheduler list, you can do this to enable it:
mars ~# echo "deadline" > /sys/block/sda/queue/scheduler
(Do that for each SSD device you have in your system, sdb, sdc, sdd, etc)
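A quick loop to save some typing (hypothetical device names - substitute your actual SSDs):
mars ~ # for dev in sda sdb sdc sdd; do echo deadline > /sys/block/$dev/queue/scheduler; done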
To make it persistent upon boot (only if you can confirm that this has indeed helped you), add the following argument to your grub kernel line: "elevator=deadline". When you reboot, this will use the deadline scheduler by default.
If you're not sure which is better, set it via the echo command as listed above. Then, run a benchmark - as well as use it in practice when playing the game and/or using your system. If it feels better and more responsive, great. If not, try another one. Generally speaking, deadline and noop should always be better ONLY for SSD drives.
--
And lastly, if you use the nvidia drivers for Linux with the GL shader disk cache option (most distributions enable it by default), check to see if you have a ".nv" folder in your home directory. If you do, the driver can hit your harddrive(s) and/or raid arrays each time it writes/reads these cached compiled shader files from the disk. While the cache is intended to be faster, disk I/O is almost never faster than memory. So, I did something more interesting on my system. Since I'm the only one that uses my system, this only applies to me. If you have other people/accounts using X11 on your system, then you should decide whether you want to do this for every one of them or just one.
mars ~ # tail -1 /etc/fstab
tmpfs /home/janeway/.nv tmpfs norelatime,uid=1000,gid=1000,size=200M,nosuid,mode=0700 1 2
As you can see, I mount /home/janeway/.nv from fstab using the tmpfs filesystem. This stores the contents in memory rather than on disk, in the default path that the nvidia drivers look to use. Unlike before, this no longer hits my hard drive(s) and/or raid partitions each time the driver writes to the directory or reads from it. Instead, this is all in memory - and has the benefit of resetting every time my system reboots (which I think is a sane choice).
mars ~ # df -h /home/janeway/.nv
Filesystem Size Used Avail Use% Mounted on
tmpfs 200M 45M 156M 23% /home/janeway/.nv
(If you use a distribution that can't do this except after a reboot, make the changes to /etc/fstab and reboot. But make sure you had a ~/.nv directory first - and make sure the uid/gid values apply to you as listed in the above example.) If you don't know what your uid or gid are, that is outside the scope of this documentation. (If you don't know, I can't tell you.)
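If your distribution does support it, you can activate the new fstab entry without a reboot (assuming the ~/.nv directory already exists - mount reads the options from /etc/fstab):
mars ~ # mount /home/janeway/.nv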
STO (like most games) is currently released as a 32 bit binary and software installation. Wine has 2 versions: 64 bit and 32 bit. Since the game only ships 32 bit software at the moment, it doesn't matter which you choose. However, I am informed that 32 bit has the fewest installation problems.
--
One frequently asked question, particularly with regards to games of the 3D variety is "why does the game crash after a few hours of playing?" The answer is multi-fold. Your video settings play into this. Max settings will speed this up and it will occur more frequently. But, here is the answer in parts:
First, the STO binary in and of itself is not large address aware. Which is to say that even though you have a 64 bit OS, the game runs in a 32 bit space - and is restricted to 4G of RAM (more accurately, about 3.2 - 3.5G). Even if you have 16G of system memory, your game will never use more than 3.5-4G. With your settings turned up, it will run into an invisible wall with respect to performance at some point or another. Note, STO is not the only game that suffers from this. Believe it or not, WoW (and a lot of other 3D heavy MMOs) does as well.
The solution is for STO (and similar games) to eventually make the games large address aware - with the caveat that earlier OSes (like WinXP) will probably not be able to run STO. For the purposes of this documentation, I will not include methods which alter the binary. I won't include material that will violate the EULA (don't ask in game or via mail either, I will not respond (except maybe to laugh)).
Second, which plays into the first reason, the game continues to use more memory over the course of time. This happens every time you change zones and/or encounter more people/objects (NPCs/etc) in a zone. The higher your graphics settings and the more objects around you to buffer, the more memory the game uses.
Windows 7 gets around this by managing its memory differently - a hack to make 32 bit binaries compatible - which doesn't honor the way 32bit and 64bit binaries should behave in general. It's not necessarily a problem wine is creating so much as honoring the 'right'[tm] way 32bit vs 64bit should work.
My personal preference is go with "Start with the default" options and work from there. Game developers are pretty smart in their choice of "recommended" settings. They assume middle of the road system setups. If you know your hardware and setup are better than average, then you can tweak upward from there. Conversely, if you know your setup is less than ideal, you can pare your settings down from default.
As such, my recommendation is to go with the recommended default settings for video. With a few additional rules.
If you prefer a barebones approach, this may help: STO Performance and Frame Rate Guide v1.0. The recommended settings in this link will pare the game down to the lowest possible values and make it otherwise playable for you.
If you prefer a maximum settings approach, I still recommend my previous list above - with the other settings at maximum. The issue is with shadowed lights and frame stabilization. Middle of the road or maximum settings share those options. The rest is flexible.
As mentioned in the previous post, a 4M readahead buffer (per stripe group) works best in general. The caveat in parentheses may confuse some, so I'll explain it a little better here. Note, however, that the actual value may depend on your stripe size for your raid. But I will start off by explaining something to the LVM and flat partition users first.
The exception to this rule is those folks using on-board raid chips on their motherboard. You're using a software raid at that point (sorry, but it's sadly true). It then causes the kernel and installation method to choose a 'dmraid' (device mapper) which then creates a software version of raid - just as if you had done it manually - for you. So, if your on-board raid chip is set to do any level of raid, you want to go to the mdadm section and read further.
LVM and flat partition users, your life is somewhat easier. You don't have to worry about striping or chunk sizes. In your case, you'll just use a flat value of 8192 (or 4M) for your readahead (or whatever value you find works better for your configuration). You can skip the rest of this post, as it does not apply to you.
For mdadm users, the right readahead value depends on three things:
- Raid level
- Chunk size
- Number of disks in the array you're looking at.
The value of your readahead is going to change depending on your answers. As such, the following section will describe useful values for you.
If you chose 256k as your chunk size:
(256k * 4) = 1M
1M = 1024k
1024k * 2 = 2048 readahead
If you chose 512k as your chunk size:
(512k * 4) = 2M
2M = 2048k
2048k * 2 = 4096 readahead
Typical onboard raid controllers have these as their values, however:
64k value
(64k * 4) = 256k
256k * 2 = 512 readahead
or 128k value
(128k * 4) = 512k
512k * 2 = 1024 readahead
1024k chunk and 4 disks:
1024k * 4 {mirrors} * 2 {stripes} = 8M
8M readahead = 8192k
8192k * 2 = 16384 readahead
512k chunk and 4 disks:
512k * 4 {mirrors} * 2 {stripes} = 4M
4M readahead = 4096
4096 * 2 = 8192 readahead
256k chunk and 4 disks:
256k * 4 {mirrors} * 2 {stripes} = 2M
2048 * 2 = 4096 readahead
128k chunk and 4 disks:
128k * 4 {mirrors} * 2 {stripes} = 1M
1024 * 2 = 2048 readahead
64k chunk and 4 disks:
64k * 4 {mirrors} * 2 {stripes} = 512k
512k * 2 = 1024 readahead
1024k chunk and 4 disks:
1024k * 2 {mirrors} * 4 {stripes} = 8M
8M readahead = 8192k
8192k * 2 = 16384 readahead
512k chunk and 4 disks:
512k * 2 {mirrors} * 4 {stripes} = 4M
4M readahead = 4096
4096 * 2 = 8192 readahead
256k chunk and 4 disks:
256k * 2 {mirrors} * 4 {stripes} = 2M
2048 * 2 = 4096 readahead
128k chunk and 4 disks:
128k * 2 {mirrors} * 4 {stripes} = 1M
1024 * 2 = 2048 readahead
64k chunk and 4 disks:
64k * 2 {mirrors} * 4 {stripes} = 512k
512k * 2 = 1024 readahead
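If you'd rather not do the arithmetic by hand, here's a small shell sketch of the pattern above (chunk size in k; the final multiply by 2 converts k into the 512-byte sectors blockdev expects):
CHUNK_K=512; MULT=8                # MULT = {mirrors} * {stripes} from the tables above
RA_K=$(( CHUNK_K * MULT ))         # 4096k, i.e. a 4M readahead
RA_SECTORS=$(( RA_K * 2 ))         # 8192 sectors
echo $RA_SECTORS                   # feed this to: blockdev --setra <value> /dev/mdX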
Raid considerations - As you can see in the previous post, different raid configurations require different settings. So, how do you choose what's best for you?
For my system, I have four 1 Terabyte Samsung EVO SATA SSD drives (with 64M of onboard cache each). I didn't like the idea of using the onboard software RAID controller (and really still don't) because it doesn't let me choose configurations which might be a bit wiser for my usage. I'll explain.
Linux distributions don't like /boot to be on any partition they can't read immediately and with a minimal amount of overhead and kernel options. So, how do I accomplish redundancy and still manage to give the kernel what it wants?
Simple. I partitioned each of the 4 drives accordingly:
partition 1 {boot} is a 500M partition on all 4 drives.
partition 2 {swap} is a 6.5G partition on all 4 drives. (I'll get to why in a second)
partition 3 {root} is a 250G partition on all 4 drives.
partition 4 {home} is a roughly 0.7T partition on all 4 drives
I then configured each partition as a raid autodetect (type: fd in fdisk).
For partition 1, I wanted it to meet the Linux kernel requirements but still provide redundancy in the event of drive failure. So, I made partition 1 on all drives a raid 1 (mirroring) array. This means if I make changes to one drive, all of the others see it. Indeed, you can then install your boot loader on each of the disks and still boot your system with no additional configuration afterward.
Because the chunk size doesn't matter on the boot partition (it's fairly small anyway), I used:
# mdadm -C /dev/md0 -l 1 -n 4 --bitmap=internal -e 0.90 /dev/sd[a-d]1
Voila. If you're using Redhat derivatives (Fedora, CentOS, Scientific Linux, Mandriva, Mandrake), you can do this in the GUI by specifying your own custom layout during initial installation of the OS, creating each of the partitions as 'software raid' but with similar layouts to the above.
Note: If you're using Redhat derivatives, make certain that you use ext4 (for newer releases) or ext3 (for older releases) on your boot partition as the filesystem. Since I don't use a redhat derivative, I chose xfs for mine. See the above post regarding filesystems and the "optimal" ones.
For partition 2, this is the swap partition. I know a few people are scratching their heads going "why 6.5G"? That's easy. It's not going to be 6.5G in the end. With a swap partition, you don't want all of your I/O on one disk if you can avoid it, especially with a raid. And since this data is NOT vital (in this context, that means it doesn't matter if it becomes corrupt; you can always reformat it with zero loss to you), raid 0 was the perfect choice for this configuration.
# mdadm -C /dev/md1 -l 0 -n 4 -e 0.90 /dev/sd[a-d]2 --chunk=128k
Now, when you go to use the swap space, it will spread I/O out across the disks rather than a single one. You'll thank yourself later for doing that. In addition, that 6.5G per disk becomes 26G in the end, because it's 6.5+6.5+6.5+6.5 in raid 0. So, instead of 6.5G of swap space, I actually have 26G (more than sufficient). (I used a 128k chunk because swap shouldn't be in large chunks.)
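To actually put the new array to use as swap (standard commands; add a matching entry to /etc/fstab so it's picked up at boot):
# mkswap /dev/md1
# swapon /dev/md1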
For partition 3, this is the root partition. This is where most of your OS (but not personal files) will sit. I decided that raid 10 was ideal. However, I didn't necessarily need massive chunks as I would with the home partition later. I also wanted it to be set up in 'near' mode (as mentioned in my previous advanced raid configuration post). So, I set it up like this:
# mdadm -C /dev/md2 -l 10 -n 4 -p n2 --bitmap=internal --chunk=1024k -e 0.90 /dev/sd[a-d]3
256k-512k is the sweet spot if you're using smaller files. On my system, I use a mix of small and large files in the root partition. Smaller chunks are better for that partition than the home partition, due to the large amount of smaller files I access frequently. Thus, the home partition has a larger chunk size.
That 250G on the 4 drives in raid 10 config became 500G for the root partition.
Note: If you're using Redhat derivatives, make certain that you use ext4 (for newer releases) or ext3 (for older releases) on your root partition as the filesystem. See the above post regarding filesystems and the "optimal" ones.
For partition 4, this is the home partition. This is where most of your personal files will sit. I decided that raid 10 was also ideal here. However, unlike the root partition, I would have significantly larger files here. The .hogg files in this game are particularly huge and I use them frequently. Unlike the root partition, I wanted this one in 'far' mode for raid 10.
# mdadm -C /dev/md3 -l 10 -n 4 -p f2 --bitmap=internal --chunk=2048k -e 0.90 /dev/sd[a-d]5
Note, for all of these, I also used "--assume-clean" so that it wouldn't attempt to rebuild the array after I created it. Whether or not you use it is up to you. The 'bitmap' flag is important too (except in the case of md1 - the swap partition - where you don't care if the data is permanent). What the bitmap says is "if you need to rebuild, start from the stopping point kept in the metadata" when you stop and restart the raid array. This way, you don't have to rebuild the array from the start every time you stop it (for example, during a rebuild).
That 0.7T partition became 1.4T for the home partition in raid 10.
Note: You may use whatever filesystem is supported by your distribution for this. Either ext4 or xfs are ideal candidates.
As you can see from the configurations I made, I chose different optimizations and layouts depending on use. This allows me to use each partition at its optimal configuration without sacrificing performance for the uses they provide.
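Whichever layout you choose, you can verify the arrays after creation with either of these (standard mdadm/kernel interfaces):
# cat /proc/mdstat
# mdadm --detail /dev/md2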
So, this will cover a few useful and, to a greater extent, lesser known utilities. (If you're an expert Linux user, you already know this so I don't mean you.)
Note, as with all of my instructions, whether or not a package or command is available for your distribution depends entirely on the distribution and/or whether or not you installed it. Research may be necessary on your part to determine what package, if any, is available for your particular distribution to obtain access to the commands.
The first is lsof (list open files). For example, as root (you may need to run this by prefixing the command with 'sudo' depending on your distribution): # lsof | grep bash
That will give you a list of files either opened by a bash process and/or any processes matching the word 'bash'.
Use in STO's case: Determine if a program, like wine, has files open even though the program may not be on your screen. Example: Wine crashes but you can't start it up and don't see any error messages. Example #2: Wine seemingly stalls during the installation process of STO but is in fact running normally (during the directx phase of the install is a prime target).
You can also use grep on a file instead of grabbing data passed to it through stdin (often referred to in longer form as 'standard in'): # grep root /etc/passwd
Use in STO's case: Filter through the results of lsof, as listed above, to make it easier to search for open files.
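For example, to check whether Wine still has files open (the pattern is just an example - adjust it to the process you're after):
# lsof | grep -i wine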
Next is watch, which re-runs a command every couple of seconds and shows you its output. Example usage: # watch "grep MHz /proc/cpuinfo"
Use in STO's case: Use it to watch the files being opened, written to, and otherwise used during the installation process. (Not the only usage mind you, but it's a good one.) Used in combination with the above lsof and grep commands, this is particularly useful.
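For example, combining it with the lsof and grep commands above to watch Wine's open files refresh every two seconds (watch's default interval):
# watch "lsof | grep -i wine"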
Example usage (which may or may not require root depending on your distribution): # dmesg
In this example, I am only including a small snippet of dmesg on my system (as it's otherwise quite large):
[ 0.000000] Linux version 2.6.36-ck-r5 (root@unimatrix-01) (gcc version 4.4.5 (Gentoo 4.4.5 p1.2, pie-0.4.5) ) #11 SMP Mon Mar 14 21:04:30 PDT 2011
[ 0.000000] Command line: root=/dev/ram0 init=/linuxrc ramdisk=8192 real_root=/dev/md2 amd_iommu=on iommu=memaper=3 hpet=disable
[ 0.000000] BIOS-provided physical RAM map:
Example usage: # cat /proc/cpuinfo
I'm only including a part of one of my 8 cpus just for viewing so you know what to look for:
processor : 7
vendor_id : AuthenticAMD
cpu family : 21
model : 1
model name : AMD FX(tm)-8150 Eight-Core Processor
stepping : 2
microcode : 0x600063d
cpu MHz : 1400.000
cache size : 2048 KB
physical id : 0
siblings : 8
core id : 7
cpu cores : 4
apicid : 23
initial apicid : 7
fpu : yes
fpu_exception : yes
cpuid level : 13
wp : yes
flags : fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm constant_tsc rep_good nopl nonstop_tsc extd_apicid aperfmperf pni pclmulqdq monitor ssse3 cx16 sse4_1 sse4_2 popcnt aes xsave avx lahf_lm cmp_legacy svm extapic cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw ibs xop skinit wdt lwp fma4 nodeid_msr topoext perfctr_core perfctr_nb arat cpb hw_pstate npt lbrv svm_lock nrip_save tsc_scale vmcb_clean flushbyasid decodeassists pausefilter pfthreshold
bogomips : 7224.48
TLB size : 1536 4K pages
clflush size : 64
cache_alignment : 64
address sizes : 48 bits physical, 48 bits virtual
power management: ts ttp tm 100mhzsteps hwpstate cpb
At the top, "processor : 7" means this: all system CPUs start at 0 and count upward, so consider 0 the 'first CPU'. In this case, 7 means the 8th CPU in my system.
Note: If you get values higher than the refresh rate of your monitor, it does not actually mean you will get framerates that high. In fact, it means nothing useful to you at all. You'll want the framerate to match that of your monitor. In my case, that is 60FPS as my High Def LCD TV does 60FPS (sorry, even if it reported 15,000 or 1200, you'd never see that on ANY device in the world).
Example usage: $ glxgears
janeway@mars ~ $ glxgears
Running synchronized to the vertical refresh. The framerate should be
approximately the same as the monitor refresh rate.
298 frames in 5.0 seconds = 59.428 FPS
301 frames in 5.0 seconds = 60.001 FPS
300 frames in 5.0 seconds = 59.995 FPS
301 frames in 5.0 seconds = 60.002 FPS
301 frames in 5.0 seconds = 60.006 FPS
300 frames in 5.0 seconds = 59.998 FPS
^C
Looks good so far.
As I mention in the video section, your CPU speed does have an impact on your framerate in OpenGL. glxgears is OpenGL - NOT DirectX so it is CPU bound and single threaded (which is to say it only runs on one processor).
To confirm which renderer/driver is actually in use, glxinfo helps:
janeway@mars ~ $ glxinfo | grep renderer
OpenGL renderer string: GeForce GTX 680/PCIe/SSE2
Another useful tool is dmidecode (run as root), which dumps your system's DMI/BIOS hardware tables. Taking a snippet out from the output:
Handle 0x0002, DMI type 2, 15 bytes
Base Board Information
Manufacturer: ASUSTeK COMPUTER INC.
Product Name: M5A99FX PRO R2.0
That is the correct board.
dstat's default configuration shows total CPU usage, disk reads/writes in aggregate, network sends and receives in aggregate, swap file paging (in and out), as well as system interrupts and waits. This is particularly useful if you're trying to determine what, if anything, is going on. A particularly good usage of this command is during the STO install (again during the DirectX portion of the install) where you're not sure what's going on because it seemingly stalls but hasn't.
Example: # dstat
Next is top (example usage: # top). You can sort by any of the columns - use the "?" question mark key for help on the syntax and keystrokes needed to do this.
Note, some distributions allow you to save top settings. Though, don't necessarily count on it. Not all do.
And htop is similar. Like top, it can sort based on preference, though you don't need to fish through commands to do this. Just click on the column headers and it will sort accordingly. Like top, you can also kill and renice processes through this utility.