This page is mainly relevant to advanced users trying to squeeze more performance out of their 4store setups. Out of the box, without any tuning, 4store should be pretty fast, but there are things you can do to get more out of your hardware.
By default the Linux kernel is tuned to push mapped blocks out to disk much more quickly than is ideal for 4store. You can add the following to /etc/sysctl.conf:
vm.flush_mmap_pages = 0
vm.dirty_ratio = 90
vm.dirty_writeback_centisecs = 60000
After making these changes you will have to reboot for them to take effect.
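On most systems the settings can also be applied at runtime, without waiting for a reboot (this assumes a kernel that exposes these sysctls, such as the RHEL 5 era 2.6.18 kernels discussed below):

```shell
# Load everything from /etc/sysctl.conf immediately (run as root)
sysctl -p

# Or set the individual values at runtime
sysctl -w vm.flush_mmap_pages=0
sysctl -w vm.dirty_ratio=90
sysctl -w vm.dirty_writeback_centisecs=60000
```

Values set with `sysctl -w` alone do not survive a reboot; keep the entries in /etc/sysctl.conf so they persist.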
These changes are appropriate for CentOS/RHEL 5 systems, and other kernels of the same vintage (2.6.18). Newer kernels are more efficient, but we don't have appropriate sysctl settings for newer kernels yet. If you have some please contribute them.
Currently 4store only supports segment counts that are powers of two. This limitation is mostly historical and could be removed in a future version, but doing so would require extensive testing.
The main user-tunable value in 4store is the choice of number of segments. We recommend a maximum of one per core in the storage nodes. Do not include hyperthreads in your core count.
For example if you have a cluster with four storage nodes, each with four cores, then the maximum recommended number of segments would be sixteen.
However, it's generally advisable not to go above four segments per node, as the IO subsystem is unlikely to be able to keep up. On current machines with 8, 16, or more cores it's best to stay at around half the number of cores, unless the node has very fast storage.
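The sizing guidance above can be sketched as simple arithmetic; the node and core counts here are illustrative examples, not recommendations for any particular hardware:

```shell
# Hypothetical sizing arithmetic for a cluster
nodes=4      # storage nodes in the cluster
cores=8      # physical cores per node (exclude hyperthreads)

max_segments=$(( nodes * cores ))       # absolute maximum: one segment per core
recommended=$(( nodes * cores / 2 ))    # roughly half the cores per node

echo "$max_segments $recommended"       # prints: 32 16
```

Remember that whatever value you pick must be a power of two, so round the result down (or up, if you have fast storage) to the nearest power of two.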
It's often worth experimenting with this value to get the best performance, as it can make a radical difference, and the balance between import and query performance can vary a lot with the types of queries and data.
Segmentation is set by the 4s-backend-setup or 4s-cluster-create commands. The default for single-machine setups is two segments, and the default for clusters is thirty-two.
$ 4s-backend-setup KB --segments 4
$ 4s-cluster-create KB --segments 16
4store is primarily designed to be run with all its indexes in RAM; if that's impractical then we strongly advise you to use SSDs, or at least RAID volumes of 15k SAS disks. Note that SSDs are much more cost-efficient than fast disks relative to their performance.
If you're using disks then RAID 0, 10, 5, or 6 is appropriate for the index storage.
The indexes are kept in /var/lib/4store/, so mounting a RAID volume at that directory, or symlinking it onto a RAID volume, will work fine.
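The symlinking approach can be sketched as follows; `relocate_dir` is a hypothetical helper, not a 4store tool, and the RAID mount point is an assumption (run as root, with the store stopped):

```shell
# Hypothetical sketch: move a data directory onto another volume and
# symlink it back to where 4store expects it.
relocate_dir() {
  src=$1    # e.g. /var/lib/4store
  dest=$2   # e.g. /srv/raid/4store (a path on the RAID volume)
  mv "$src" "$dest" && ln -s "$dest" "$src"
}
```

Usage would be something like `relocate_dir /var/lib/4store /srv/raid/4store`.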
You can also stripe the data as for SSDs, below. We don't currently know how much performance advantage this gives over RAID volumes.
If you're using SSDs then it's far better to use 4store's internal storage striping. Unfortunately there are no tools to make this easy, but within the /var/lib/4store/ directory there's a subdirectory for each KB, and directories within that for each segment, named like 0002. Symlinking these segment directories to separate volumes will increase performance much more than using RAID over SSDs.
Given a machine, "node0", running a KB "kb" with two SSDs, mounted on /srv/ssd0 and /srv/ssd1, and four segments per node, this can be done as follows.
N.B. the cluster/store must be stopped with 4s-cluster-stop kb or 4s-backend-stop kb as appropriate.
$ ssh node0
$ cd /var/lib/4store/kb
$ mkdir /srv/ssd0/kb
$ mv 0000 /srv/ssd0/kb/
$ ln -s /srv/ssd0/kb/0000 .
$ mkdir /srv/ssd1/kb
$ mv 0003 /srv/ssd1/kb/
$ ln -s /srv/ssd1/kb/0003 .
Note that the ID numbers of the segments will depend on the number of nodes, the number of segments, and whether replication is enabled.
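When a node hosts many segments, the manual steps above can be scripted. A sketch, assuming two destination volumes and that the store is stopped; `stripe_segments` is a hypothetical helper, not a 4store tool:

```shell
# Hypothetical helper: spread a KB's segment directories across two volumes,
# alternating between them, and leave symlinks where 4store expects the data.
stripe_segments() {
  kbdir=$1   # e.g. /var/lib/4store/kb
  vol0=$2    # e.g. /srv/ssd0/kb
  vol1=$3    # e.g. /srv/ssd1/kb
  cd "$kbdir" || return 1
  i=0
  for seg in [0-9][0-9][0-9][0-9]; do
    [ -d "$seg" ] || continue              # skip if no segment dirs match
    if [ $(( i % 2 )) -eq 0 ]; then dest=$vol0; else dest=$vol1; fi
    mkdir -p "$dest"
    mv "$seg" "$dest/"                     # move the segment onto the volume
    ln -s "$dest/$seg" .                   # symlink it back into the KB dir
    i=$(( i + 1 ))
  done
}
```

Usage would be something like `stripe_segments /var/lib/4store/kb /srv/ssd0/kb /srv/ssd1/kb`, run as root on each storage node.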
Enabling replication will reduce import performance, but will not have a significant effect on query performance. For situations where speed is critical, it's better to replicate the entire cluster and load balance between the copies.