Hardware / Maximums

Design

RSS’s cooperative system design, necessary to keep operations normal when drives and whole servers fail, also allows performance ‘greater than the sum of its parts’, particularly once cost of ownership is considered. Why? Because a collection of last year’s top-of-the-line servers delivers more performance at less cost than one of this year’s top-of-the-line servers.

For the especially cost-sensitive, even older used servers, combined into an RSS group, deliver more performance than a group of 50 people all working at the same time is likely to need.

There are many possible load combinations: heavy file use, web traffic, database updates, multimedia-heavy traffic, real-time updates, and ‘library’ use (very large storage of infrequently accessed material). Scoping and sizing a system is therefore more art than science, but given RSS’s cost structure and ease of expansion, getting it close and then adjusting based on experience is a reasonable approach.

How well does it hold up over time? One system is going on 8 years, another is at 6 — and counting.

  • Space Requirements / Location, Wifi / Wiring Considerations: The space requirements vary considerably with the purpose of the installation at each of the organization’s locations.
    • A typical ‘main location’ supporting the needs of up to several dozen mostly local staff actively using system resources at the same time, and not supporting hundreds of simultaneous website sales/chat/bloggers, might reserve about the same space as a typical home dishwasher (24U rack cabinet on wheels), located as follows:
      • Important: At least 16″ of open airflow on the front and rear faces at all times.
      • Important: In a climate-controlled area, or the coolest available alternative that is always ten or more degrees above freezing and ‘non-condensing’ (no water streaks on walls or surfaces). For example: a slightly heated garage, a corner of a warehouse, or a small closet with an interior HVAC vent and a cold-air return (or a gap under the door) is OK. But a small closet with a solid door and no HVAC vents moving air inside is not a good idea. The cooler the ambient temperature, the longer the time between repairs.
      • Important: Allow for steady fan noise. The systems deploy several small internal fans, which speed up as the environment gets warmer. The higher the speed, the louder the fans. More expensive cabinets do better at absorbing fan noise, but we advise against locating any full-time staff member’s desk within several yards without extra steps to moderate ambient noise, or hearing protection. In a hot environment with fans running, peak noise might be similar to city traffic. Without extra noise-moderating cabinetry and server choices, in a climate-controlled room, idle fan noise might be comparable to a normal conversation or a window-unit air conditioner.
      • Important: Plan cabling to allow two yards/meters or so of occasional ‘rolling room’, so service and cleaning staff can get underneath and behind the cabinet when repairs are necessary without disrupting operations.
      • Desirable: Two power outlets within reasonable distance of the cabinet, each connected to different panel circuit breakers, neither of which also connects to any heavy motorized unit (such as an air conditioner, industrial equipment, oven, garage door motor, etc.) This allows operations to continue, if in a degraded way, when one circuit has a problem.
      • Desirable: Close to internet service provider’s client-side devices (cable modems, fiber-optic-to-ethernet converters, etc). This allows the ISP’s converters to get ‘clean power’ from the RSS cabinet source, and minimizes cable management issues.
      • Desirable: Physical access control. Anyone with physical access and enough time can compromise any security precautions. Remember to make provision for service staff to have access.
      • Worth thinking about: There are two battery-backed power regulation devices (UPSs) in each RSS cabinet. Think of these as very short-term ‘good operations’ support: they smooth out public utility power and attempt to ‘take the hit first’ if lightning strikes, but they are not a means to keep operations going for more than several minutes (less than 30). If your power supply is less stable than your business requires, consider having some manner of backup generator available; the UPSs will carry smooth operations long enough to bring alternative power online. A rough runtime estimate is sketched below.
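        For rough planning, runtime is roughly usable battery energy divided by load. The sketch below uses illustrative battery and load figures that are assumptions, not RSS specifications; the ‘less than 30 minutes’ statement above remains the planning number.

        ```python
        # Rough UPS runtime estimate: usable battery energy divided by load.
        # The 250 Wh and 500 W figures are illustrative assumptions, not RSS specifications.
        def ups_runtime_minutes(battery_watt_hours, load_watts, inverter_efficiency=0.9):
            """Approximate minutes of runtime for a battery-backed UPS at a steady load."""
            return battery_watt_hours * inverter_efficiency / load_watts * 60

        print(round(ups_runtime_minutes(250, 500)))   # ~27 minutes, consistent with 'less than 30'
        ```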
      • There are several ‘whole building’ lightning/surge protectors on the market. They are installed by professionals at the switch panel nearest the building’s power entry point and are a very good idea: very inexpensive insurance for installations in lightning-prone areas or any industrial location with welding or large-amperage motors. If the RSS system is powered from a sub-panel, a second such surge protector on that sub-panel is advised.
      • Worth thinking about: Security cameras covering the server cabinet (capturing the face of anyone touching the keyboard/screen often placed atop it) are a very effective deterrent.
      • Worth thinking about: While it is possible to require all users to connect via their own internet services, so that strictly speaking only cabinet power and internet cables are necessary to provide access, nearly all installations will want to offer at least local wifi access, or both wired and wifi access.
        • Protected / Staff WIFI: Within the cabinet are two managed switches, each providing open ports for staff LAN traffic. Both are equivalent and working at all times, but it is the nature of Ethernet switches that when one fails, everything connected to it is cut off unless the downstream devices provide their own secondary means of connection. So plan to run cables to wifi access points from both switches, and arrange the access points with sufficient coverage overlap that if one switch fails, the other half can still provide service, even if at a slower speed.
          • While it is possible to ‘daisy chain’ wifi access points, and if load is light that is often a cost-effective choice, it is better to have a ‘home run’: a cable from each access point directly to a managed switch.
        • Local LAN/Wired Access: “Cat-6” cable is by far the fastest, safest, and most reliable copper option; fiber-optic is better still, but less cost-effective. Careful choice of server location can dramatically reduce costs and improve reliability: minimize cable-run lengths to client desktops and wifi access points. The more direct connections to the switches in the cabinet, the fewer ‘downstream’ unmanaged switches, and the shorter the cable runs, the more reliable the overall operation over time. Within the RSS cabinet are two managed switches, each providing open ports for LAN traffic. They are equivalent and operate independently; should one fail, the other still works.
          • There should be no ‘cross connections’ among devices or further unmanaged switches plugged into either of the two managed switches. Think of a simple tree and branches (see the sketch after this sub-list). More sophisticated designs are possible, but they require expertise; fair warning, very bad things will happen if unmanaged cross connections exist.
          • It is a judgement call whether several longer wired runs back to the cabinet are a better choice than one long run to an unmanaged switch nearer the client devices, or an unmanaged switch near the RSS cabinet. Generally, favor direct runs for access points.
            • Service times are reduced, often by a whole day, when ‘all the switches are in one place’, at the cost of greater cable lengths. This is particularly true when adding switches that power access points or telephone desk-sets over the Ethernet cable.
          • However, it is in the nature of wired Ethernet that should a switch fail or a cable break, a connected ‘downstream’ device loses connectivity unless it has a backup path. A reasonable approach is:
            • Connect a mostly unused simple switch to each managed switch, with enough open ports that when something fails there are spare spots to move the connections to. Unless your site has taken extra steps, RSS does not care which switch port a given client connects to.
            • And/or: provide backup staff wifi access and instruct those with cable/connectivity issues to use wifi while the wired options are being fixed. NOTE: Be sure to advise staff to turn wifi off entirely after repairs are made, or speed will suffer.
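          To make the ‘simple tree, no cross connections’ rule above concrete, the sketch below checks a wiring plan. The device and switch names are hypothetical examples; only the rule itself comes from this document.

          ```python
          # Minimal check of the 'simple tree, no cross connections' wiring rule.
          # Hypothetical example wiring: each cable is (downstream device, switch it plugs into).
          cables = [
              ("office-switch-1", "managed-switch-A"),
              ("ap-front",        "managed-switch-A"),
              ("ap-rear",         "managed-switch-B"),
              ("desktop-7",       "office-switch-1"),
          ]
          managed = {"managed-switch-A", "managed-switch-B"}   # the two cabinet switches

          def is_simple_tree(cables, managed):
              """True if every device has exactly one uplink and following uplinks
              always reaches a managed switch without looping (a plain tree)."""
              uplink = {}
              for device, switch in cables:
                  if device in uplink or device in managed:   # second uplink = cross connection
                      return False
                  uplink[device] = switch
              for device in uplink:
                  seen = set()
                  while device not in managed:
                      if device in seen or device not in uplink:   # loop, or dangling uplink
                          return False
                      seen.add(device)
                      device = uplink[device]
              return True

          print(is_simple_tree(cables, managed))   # True for the example wiring above
          ```

          Adding a second uplink for any device, or a loop among unmanaged switches, makes the check fail, which is exactly the condition warned about above.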
      • Worth thinking about: Guest / less-protected, internet-access-only wired/WIFI. Each server has an Ethernet port dedicated to guest/public access. It is entirely usual for security-sensitive installations to offer no guest wifi and to leave this port disconnected. However, if any manner of guest wifi is necessary, it is well worth the effort to use these server ports and to avoid any solution that offers guest access via any other RSS port, directly or indirectly. In particular, avoid plugging a separate ‘internet router’ into a LAN port on the assumption that its own ‘LAN’ ports protect against access to the staff LAN; they offer some protection, but not enough. The Ethernet port on each server dedicated to guest access is always live: plug in and go.
        • The public access port on an RSS server can be a quick way for diagnostic/service staff to ‘get online’.
        • Though it costs more to buy additional access points for guest wifi, plugging them into these guest ports on the RSS servers is a much better security choice than using the ‘guest’ SSID facilities offered by many wifi access point vendors. A guest SSID is better than giving everyone access to the staff wifi, but nowhere near as good as separate access points on the public ports of the RSS servers.
        • If you wish to provide ‘coffee house’ or ‘guest’ access, do so by connecting wifi access points (with SSIDs named to identify their guest status) and further Ethernet switches to these ports, but DO NOT EVER cross connect anything else from an RSS system to them, directly or indirectly, lest security be cancelled entirely.
        • For those whose guest/public needs exceed staff/local/protected needs, consider adding a second pair of managed switches to the cabinet and connecting them to the public access ports, in the same fashion the protected LAN switches connect to the two server LAN ports. This provides all the failover support and multi-ISP redundancy while still protecting inside systems.
        • For those with very extensive guest/public needs, or different ‘levels’ of public/guest access, all of which require failover as well: consider a dedicated install of the ‘communications point’ version of the RSS setup (mostly the same as this, except with much less storage space and support). Plug its ISP ports into the public NICs on a couple of ‘main servers’; this converts its protected LAN ports into public ports that retain all the failover functionality.
    • RSS “Communications Only”/Secondary Location Servers: Logically equivalent to main location setups in every way, physically they are about half the size or less, simply because no server needs space for more than 2 storage devices. Whether they are rack or cabinet mounted depends on whether UPS devices are included. If no productive work would happen at the location during a power failure anyway, and if the only function is to link the site logically to larger installations (no hosting websites, no large-scale local file storage, just secure internet-dependent ‘seems like one building but really elsewhere’ functions), the system can be as quiet as a typical desktop and fit on a well-ventilated shelf. It can still make use of multiple ISPs and everything else documented here, though most traffic will route directly from client devices to the public internet, except when secure access to main location resources is needed (which automatically uses a site-to-site VPN).
    • RSS Large Scale Locations: Logically equivalent to a typical setup in every way, these can scale to as many full-height racks of dozens of servers as the application might call for, acting as virtual hosts or dedicated to databases, web hosting, internet connectivity, or special-purpose devices (PBX, git, etc.). See the Capabilities tab for details of the logical RSS design ‘maximums’.
  • Server Requirements / General: Typically, an RSS installation will have a few smaller servers which together cost less than one high-performance server. Because RSS was designed and is tested on older, speed- and space-constrained servers, it is highly efficient: it delivers every bit of the capability the hardware offers, and little is lost to overhead and ‘bloatware’. The performance-per-cost of servers is so favorable that clients will have ample time to add capacity when response time starts to lag under increasing staff count and usage.
    • Because RSS is designed to expect and cope with failed hardware, it is within reach of cost-constrained buyers: re-purposing used or older-model servers makes sense. Five used servers deliver more performance than a single new model, and that is with the fifth being a pre-installed spare, doing little more than standing ready to replace a failure in the others. When one of the main four fails, all that is necessary is a ‘no tools’ swap of some storage devices, perhaps a hostname change, and a reboot. Up and running; all that is left is to await the arrival of the replacement spare.
    • If the location requires a custom VM with access to special-purpose or one-off hardware interfaces, confirm the related server supports VT-d (directed I/O device passthrough to virtual machines). Legacy label printers and the like often call for this.
    • An example of a reasonable used server is the Supermicro X9DRi-F 2U 8-bay server with 2x E5-2680 v2 2.8GHz (20 cores total), 64GB, and rails: as of 4/2022, about $820 each used. A single new high-performance server, by comparison, costs more than 10x that price (see the arithmetic sketched below).
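      As a quick check of the cost claim, the arithmetic below uses only the figures quoted above, treating ‘more than 10x’ as exactly 10x for a lower bound.

      ```python
      # Back-of-envelope comparison using only the figures quoted above
      # (April 2022 used price; 'more than 10x' treated as exactly 10x for a lower bound).
      used_server_price = 820          # Supermicro X9DRi-F example, used
      group_size        = 5            # 4 active servers + 1 pre-installed spare
      new_server_price  = 10 * used_server_price

      print(group_size * used_server_price)   # 4100 -- an entire RSS group of five used servers
      print(new_server_price)                 # 8200 -- lower bound for one new high-performance server
      ```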
  • Server Requirements / Minimum:
    • Memory: To maintain efficiency discipline, RSS is stress tested and developed on servers with 32GB of memory; even so, 48GB should be considered a bare minimum, 64GB or more delivers better performance over time, and for the added cost 128GB pays off in storage access and caching performance. More than that makes sense if hosting several high-traffic websites. Only ECC memory is supported.
    • CPUs: Though RSS is stress tested and developed on servers with 8 total processing cores per server, running at most around 2.6GHz, more cores are better. For example, an old E5-2670 v2 running at 2.5GHz has 10 cores, costs less than $50, and most server motherboards accept two. Avoid clock speeds below 2.3GHz. Ensuring the processor has AES acceleration hardware is strongly recommended (a quick check is sketched below). Beyond that, pay attention to annual energy use.
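      One way to confirm the AES recommendation on a candidate Linux server is to look for the ‘aes’ CPU feature flag. This is a convenience sketch, not an RSS tool.

      ```python
      # Quick check that a candidate Linux server's CPU advertises the 'aes'
      # acceleration flag recommended above.
      def has_aes_acceleration(cpuinfo_path="/proc/cpuinfo"):
          """Return True if the first CPU 'flags' line lists the 'aes' feature."""
          with open(cpuinfo_path) as f:
              for line in f:
                  if line.startswith("flags"):
                      return "aes" in line.split()
          return False

      if __name__ == "__main__":
          print("AES acceleration:", "present" if has_aes_acceleration() else "missing")
      ```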
    • Storage Bays / Drives: If deployed as part of a file store location, at least 6 drive bays, with no upper limit; otherwise, 3. Whether solid state, SATA, SAS, 3.5″, or 2.5″ depends on the marketplace at acquisition time.
      • Every RSS system reserves 2 drives, maintained as mirrors of one another, for system operations. 7200 rpm 1TB spindle drives are sufficient, solid state drives improve performance and reduce size. “Hot Swap” capability is a convenience, but not required.
      • Systems intended as live file stores should populate the remaining drive bays with drives of near to the same size. More drives deliver better performance. For size reference: RSS never stores fewer than three copies of client data. As of this writing (Q2 2022), the price/performance sweet spot is 7200rpm 4TB spindle drives. Drives slower than 7200rpm or smaller than 1TB are not advised.
      • Systems intended as communications centers, not live file storage or database hosts, can have ‘almost anything’ in the third drive slot, but not less than 1TB, either solid state or 7200rpm.
      • Remember, at file store locations 4 servers are populated with 6 storage devices each; the backup/spare needs bays for the same 6, but need have only the 2 system drives installed. (A worked capacity example follows this list.)
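        A worked capacity example under the defaults above. The four-data-drive layout and the 4TB ‘sweet spot’ drive size are the figures already given; the three-copy rule sets the usable fraction.

        ```python
        # Worked example: raw and usable capacity of a default file-store group.
        servers     = 4    # active file-store servers (the spare holds only system drives)
        bays        = 6    # storage devices per active server
        system_bays = 2    # mirrored pair reserved for system operations
        drive_tb    = 4    # Q2 2022 'price/performance sweet spot' data drive
        copies      = 3    # RSS keeps no fewer than three copies of client data

        raw_tb    = servers * (bays - system_bays) * drive_tb
        usable_tb = raw_tb / copies
        print(raw_tb, round(usable_tb, 1))   # 64 TB raw, ~21.3 TB of client data
        ```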
    • Ethernet Connections: The type and number of Ethernet ports differ significantly between file store and comm center use. Because of the point-to-point nature of Ethernet cabling, Ethernet switches are a notoriously hard-to-detect single point of failure, and as such they are avoided where possible. This increases the port count on a server, but we feel it decreases the overall cost of ownership and increases performance. Short version: never fewer than 2 Ethernet ports of at least 1GbE, potentially as many as 10 (all at least 1GbE), with 6 having 10GbE capability (a tally follows this list). As follows:
      • For ‘File Store’ and optionally ‘Comm Center’ systems: a 4-NIC add-in card, preferably 10GbE but not less than 1GbE. Each of the first five servers involved in a file store has a dedicated hard-wired cable to each of the others (4 are active, 1 is the standby spare). These carry secure traffic only and provide the backbone of the high-availability file storage system.
      • Only for File Store capable systems, optional: one further NIC, preferably 10GbE but not less than 1GbE. For typical locations this is used for expansion to servers 6 and up, and for diagnostic connections. For locations with larger file store server counts, each file store server is directly connected to 2 to 5 others, arranged so no server is more than one ‘hop’ from any other; this provides the linkage to file store servers 6 and up. The servers themselves perform switch functions as necessary, and the multiple pathways allow survival when a cable, or a whole system, fails.
        • Though not recommended, it is possible to add the complexity of switches and so reduce the cable and port count. Generally we feel cables and ports are less or similarly costly and deliver better performance.
      • Comm Center systems not choosing the 4-port secure NIC: 1 port, not less than 1GbE, dedicated to secure inter-server traffic, connected to a dedicated unmanaged switch plugged into a different power circuit than the switches serving the LAN ports (described below).
      • If connecting the server to an ISP: 1 port at a speed matching the ISP’s peak capability; 1GbE is typical. Maximum of 1 ISP per server.
      • If providing guest/public access: 1 port, not less than 1GbE. 2 if ultra-high availability for guest/public is essential (very rare, slightly customized system).
      • LAN traffic: 2 ports, not less than 1GbE, 10GbE preferred. Each of these is wired to a different managed switch (the managed switches are inter-connected as well). If willing to allow an unmanaged switch to be a single point of failure, 1 port.
      • Optional: KVM/ILO/Management port. Most servers designate an ethernet port for remote management. While not strictly necessary for RSS operations, these can reduce time-to-repair and time-to-detect problems.
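      To make the ‘never fewer than 2, as many as 10’ summary concrete, here is one tally of the per-role counts in this list. It is one reading of the text, not a required build; other mixes (for example, two guest ports and no management port) reach the same maximum.

      ```python
      # One way to reach the 'as many as 10' port figure for a file-store server
      # taking every option listed above.
      ports = {
          "file-store mesh (4-port add-in card)": 4,   # preferably 10GbE
          "expansion / diagnostics":              1,   # preferably 10GbE
          "ISP uplink (max 1 per server)":        1,
          "guest / public access":                1,
          "LAN, one to each managed switch":      2,   # 10GbE preferred
          "KVM/ILO management (optional)":        1,
      }
      print(sum(ports.values()))   # 10
      ```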
    • Misc “Server Goodies”: All optional. These include temperature-sensitive fan speed (and noise) control, front-panel USB and/or status lamps, rack rails and cable lengths sufficient for ‘pull forward’ repair, a cabinet-top keyboard, mouse, monitor and KVM switch to attach directly to each server, and serial ports, parallel ports, or special-purpose devices such as wifi, bluetooth, etc.
  • Design Maximums: What specific capabilities each RSS client location installation actually offers is determined almost entirely by hardware and structural choices, the many details of which are under the Structure tab above. All RSS functions are supported at every client location; whether and to what extent each finds use depends on the client’s purpose. With that said, every location of each client has the same ‘design maximum capabilities’, detailed below. Clients are free to use some, all, or none of the provided capabilities.
    • It is a design feature that capabilities of no interest to a client can be ignored– capabilities of no use do not impose management burdens or security risks. Everything in RSS is auto-configured in a default operating way, detailed under the menu tabs on this website.
    • Any number of client-specific extensions are possible; the intention is for RSS to be a platform clients may extend with their particular expertise without having to learn and manage ‘everything else’.
    • Design Limits:
      • Up to 16 locations per client can appear to be ‘in the same building’ (client groups are called ‘squads’ in RSS). Up to roughly 2000 client groups can be ganged together as members of ‘echelons’.
      • At each location, up to 7 internet service providers can be hardwired to bare-metal virtual hosts (machines that also provide further capabilities).
      • At each location, up to 9 further internet service providers can be added on dedicated small bare-metal hosts. Usually these are configured to use cellular wifi hotspots (even mobile phone hotspots) as backup internet service providers. However, that is not a requirement; physical connections are allowed. The RSS security design requires ISPs to have a direct hardwired connection to a bare-metal host.
      • At each location, 4 to 7 virtual machine hosts also serve as high-availability file/block/object servers, while 0 to 62 further bare-metal servers (ceph-osd hosts) can be installed to increase file service capacity. The number of storage device slots per server times the size of the drives installed dictates raw capacity, though RSS policy is that all important data must be stored on no fewer than three different servers. A typical RSS noc host will have 6 installed storage devices, as follows:
        • two paired and isolated for system software storage and operations (1TB usually each, but up to 18TB if local high speed data operations exceed customary demand), and
        • four of the same or nearly the same size, supporting general-purpose database operations and file, block device, and object storage. These can be any size; very small if general-purpose storage is not of use at the location.
      • At each location, there can be either 0, or between 4 and 59, world-wide-web servers, of whatever CPU core count/speed and memory capacity the application calls for. Load is distributed among locations and working servers. Servers 1 through 7 are assigned to virtual machines, one each on the up-to-seven VM hosts (called ‘nocs’, or ‘network operations centers’, internally). Networking security designs are in place which are not documented publicly, except to note they do not hinder performance in a materially measurable way.
      • At each location, there can be between 4 and 59 database servers whose data storage is restricted to the storage available at that particular site, though the servers as a group can be referenced by any location. These are called ‘dbl’, or ‘database local’, internally. The first 7 servers must be hosted as virtual machines, one per noc; the rest can be bare-metal. For reasons of speed, the data is stored fewer times per host and so is limited to the size of one disk drive (500GB to 18TB) per location. Networking security designs are in place which are not documented publicly, except to note they do not hinder performance in a materially measurable way.
      • At each location, there can be between 0 and 59 database servers that replicate changes immediately across locations (if any). By default, the data storage is limited to about one third of the total capacity of the drives in one server, perhaps 60TB. However, a configuration change can distribute the data across the storage cluster at a location; realistically, assuming 18TB drives at 10 bays per server, approximately 3,700TB divided by the database server count (see the arithmetic sketched below). Networking security designs are in place which are not documented publicly, except to note they do not hinder performance in a materially measurable way.
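        The ~3,700TB figure can be reproduced under one plausible reading of the limits above; the exact derivation is not spelled out, so treat this as a sketch.

        ```python
        # One plausible reading of the 'approximately 3,700TB' figure: the up-to-62
        # additional storage hosts, an assumed 10 bays each, 18TB drives, and the
        # three-copy rule. Assumptions, not a documented derivation.
        extra_storage_hosts = 62    # maximum further bare-metal ceph-osd hosts (see above)
        bays_per_host       = 10    # assumed, per the '10 bays/server' example
        drive_tb            = 18    # largest drive size mentioned
        copies              = 3     # minimum copies of important data

        print(extra_storage_hosts * bays_per_host * drive_tb / copies)   # 3720.0 TB, ~3,700TB
        ```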
      • At each location, while technically there can be 0 to 59 email servers, in practice two per client at one main location provide all functions. Higher demands can be met by using higher-performance bare-metal servers as email hosts, but small virtual machines running on two different nocs suffice for the needs of thousands of connected users. So long as one is working, email is up.
      • At each location, all network-related routing, proxying, and failover activity is managed on one of two ‘gate’ servers running as a ‘live’/‘hot backup’ pair on different hosts. In all but the largest installations, it suffices to run these as virtual machines on different noc servers. Bare-metal installs are possible, but when network throughput is much higher than any one workgroup’s, it is usually a much better choice to scale up the performance of two of the nocs and then allocate resources to the gate VMs.
      • At each location, all authentication, authorization, and staff/machine/service credentialing operates on two master-master ‘registry’ virtual machines. These machines also handle domain name resolution for the location and, usually, for the public. As with the gate machines, it is generally better to scale up the VM bare-metal host and allocate the resources to the registry machines than to run them on bare metal, though that is certainly an option. Adding further Alma/FreeIPA replicas, as masters or slaves, to grow capacity beyond the thousands of users this supports is available as a custom enhancement if necessary.
      • At each location, there is one ‘sysmon’ bare-metal device, usually a spare server like the other nocs but with minimal storage; it could even be the installer’s desktop or laptop. Its primary function is to ‘break ties’ when the number of other servers is even and a disagreement splits 50/50. It also records logs, and generally provides a physical access point for visiting repair folks/admins that does not disturb anything. If it is powered off, so long as everything else is OK, nobody will notice.
      • Optional Cloud Routers: RSS’s parent company, Quiet Fountain LLC, maintains a pair of ‘cloud servers’ (housed, at this writing, in Oregon and Virginia, USA). Almost no custom software runs on these, and they never store any client-created data. Their purpose is to route fixed public internet addresses to RSS client locations, so clients can pick and choose among internet service providers, and to play traffic cop among client locations.
        • Large clients can designate or maintain their own cloud access routers (called ‘rssnocs’ internally, and named according to the latitude and longitude of their physical location). So long as one router is up, the client locations remain connected to one another, and the features that make local services available via the public internet stay enabled. All that is required is a pair of capable servers, housed in different buildings, with different static internet addresses.
        • Note that folks at client locations can always browse the net so long as one ISP at that location is up, without regard for (or use of) the RSS cloud routers. Only traffic from ‘the public’ to ‘the location’ depends on the operation of the cloud routers, with some narrow technical exceptions (mostly email related).
        • Client information does not ‘cross paths’ because of this: all traffic between client locations and these routers is encrypted via WireGuard (see the Internet section of this website). Furthermore, each client uses certificates it creates and manages itself (though each client starts with custom random defaults assigned by RSS).
        • As RSS client location count and traffic increases, so too will the number of cloud routers.