System Administration
Air-Gapped Environment with an Ocient System
An air gap is a security measure that physically isolates a computer or network from any other network, including the internet. This isolation removes wired or wireless connections between the isolated system and any external, potentially unsecured networks. Organizations that handle highly sensitive information, such as government agencies, financial institutions, and critical infrastructure operators, use air-gapped environments to protect against various security incidents.

{{ocient}} supports using an Ocient System in an air-gapped environment without any special configuration. Here, you can find the general workflow for using an Ocient System in an air-gapped environment and the required installation steps.

Work with an Ocient System in an Air-Gapped Environment

This workflow shows the high-level steps to set up an Ocient System in an air-gapped environment.

1. Ensure that your drive firmware and operating system are up to date.
2. Install and configure an Ocient System in a connected environment where you can download the required files from the internet.
3. Disassemble the cluster pack and deliver the hardware to your secured location.
4. Reassign a new IP address.
5. Reassign a new hostname.
6. Bootstrap the system again.

Install an Ocient System in an Air-Gapped Environment

Follow these steps to install the Ocient System in an air-gapped environment. These steps assume that you have met the system requirements for installation.

Bootstrap the First Node

1. Bootstrap the node. Connect to the initial SQL node with the username and password of your server and the IP address of your node. This example connects as the administrator admin to the IP address 10.10.10.10.

   ssh admin@10.10.10.10

   Use your preferred text editor with sudo to create the /var/opt/ocient/bootstrap.conf file as root with these contents.

   /var/opt/ocient/bootstrap.conf example:

   initialSystem true

2. Start the database.

   sudo systemctl start rolehostd

3. Verify that the node and the service are active by executing this status command.

   systemctl status rolehostd

   rolehostd.service - rolehostd daemon startup
      Loaded: loaded (/etc/systemd/system/rolehostd.service; enabled; vendor preset: enabled)
      Active: active (running) since Wed 2022-01-26 23:31:36 UTC; 7s ago

   If the rolehostd service is running, you can also check the Ocient logs on your node. Search the log and ensure that there are no [ERROR] messages.

   tail -f /var/opt/ocient/log/rolehostd.log

Verify Connection to the SQL Node

At this point, you have a running database with a single node. You should be able to connect to the database using JDBC or pyocient and execute commands. Every new system starts with a system database. To connect to a new system, use the username and password configured in the bootstrap.conf file, or the username admin@system and the password admin if none was provided.

For example, assume your node named sql0 has an IP address of 10.10.0.1. Use the JDBC driver CLI to connect with this connection string.

CONNECT TO jdbc:ocient://10.10.0.1:4050/system USER "myuser@system" USING "mypassword";

To see the roles running on the single node, execute this query.

SELECT name, operational_status, software_version, array_agg(service_role_type)
FROM sys.node_status AS ns
LEFT JOIN sys.nodes AS n ON ns.node_id = id
LEFT JOIN sys.service_roles AS sr ON sr.node_id = n.id
GROUP BY name, operational_status, software_version;

The initial node should be listed as running the SQL, Admin, Health, and OperatorVM roles. If all of these roles are present and the node is active, you can proceed to the next step to bootstrap the remaining nodes.
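If you prefer pyocient to the JDBC CLI, a connectivity check such as the following can run the same role-verification query. This is a minimal sketch only: it assumes the pyocient DB-API driver is installed and accepts a DSN of the form ocient://user:password@host:port/database, and the credentials, host, and escaping of the @ in the user name are placeholders to adjust for your system.

# Minimal pyocient connectivity sketch. The DSN format, credentials, and host
# below are assumptions -- substitute the values configured for your system.
import pyocient

# Assumption: the '@' in an Ocient user name such as myuser@system may need
# percent-encoding ('%40') inside a DSN.
dsn = "ocient://myuser%40system:mypassword@10.10.0.1:4050/system"

conn = pyocient.connect(dsn)
cursor = conn.cursor()

# Same query as the JDBC example above: list each node with its roles.
cursor.execute("""
    SELECT name, operational_status, software_version, array_agg(service_role_type)
    FROM sys.node_status AS ns
    LEFT JOIN sys.nodes AS n ON ns.node_id = id
    LEFT JOIN sys.service_roles AS sr ON sr.node_id = n.id
    GROUP BY name, operational_status, software_version
""")
for name, status, version, roles in cursor.fetchall():
    print(name, status, version, roles)

conn.close()

If the initial node appears with the SQL, Admin, Health, and OperatorVM roles and an active status, the connection is working as expected.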
Bootstrap the Remaining Nodes

The bootstrapping process is identical on all of the remaining nodes, and you can bootstrap the remaining nodes in any order.

1. On each node, log in using SSH and use your text editor with sudo to create the file /var/opt/ocient/bootstrap.conf that contains this text, replacing <first node IP address> with the IP address of the initial node that you created in Step 1.

   /var/opt/ocient/bootstrap.conf example:

   adminHost <first node IP address>

   <first node IP address> is the DNS name or IP address of the initial node. You can obtain the IP address of the initial node by executing ifconfig on that node. If the password for the system administrator has changed, set the correct username (adminUsername) and password (adminPassword) in the bootstrap configuration file bootstrap.conf.

2. On each node, start the database.

   sudo systemctl start rolehostd

When you replace Foundation Nodes, the Ocient System removes the prior node after the creation of the new node. Some queries of the system catalog tables might not return results until the prior node is removed.

At this point, the remaining nodes are not configured with any roles. After all nodes have started, you should see them when you execute this query, with only the Health role listed for each. This query uses the sys.node_status, sys.nodes, and sys.service_roles system catalog tables to retrieve the node name, operational status, software version, and all service role types. The query uses the array_agg function to retrieve the service_role_type values for all rows.

SELECT name, operational_status, software_version, array_agg(service_role_type)
FROM sys.node_status AS ns
LEFT JOIN sys.nodes AS n ON ns.node_id = id
LEFT JOIN sys.service_roles AS sr ON sr.node_id = n.id
GROUP BY name, operational_status, software_version;

Remove a Cluster in an Air-Gapped Environment

Use this information and workflow to remove a cluster by shutting down all Ocient processes and erasing the drives and data. The system administrator must download the sedutil-cli utility used in this workflow.

You can classify the drives in an Ocient System into two categories based on their usage.

Operating system (OS) drives
These drives contain the installation of the operating system and software, including the {{ocienthyperscaledatawarehouse}} (OHDW). A system might have a single physical OS drive or an OS installed on a RAID disk created by using more than one drive. The OS drives on nodes with the Administrator role (metadata and possibly SQL Nodes) also store configuration information related to the OHDW that can include node names, node IP addresses, user data, mapping information for compressed columns, and encryption keys for the data drives (unless the keys are under the control of an external key management system).

Data drives
The data drives are present in all types of nodes except a node running only the Administrator role. On the Foundation Node, the data drives store tables. On SQL Nodes, the data drives store transient query information, and on Loader Nodes, the data drives store transient loading information.

The data drives in the OHDW are exclusively NVMe drives. The OS drives can be NVMe or SSD. These drives can support the Trusted Computing Group (TCG) Opal specification or not support it. The type of drive (Opal-supported or not) determines how the system removes all data on the drive.
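Because the erase method depends on whether a drive supports TCG Opal, it can help to confirm support before you start. The sketch below is one possible check using the sedutil-cli scan option; it is an illustration only, and it assumes sedutil-cli is installed, the account can run it with sudo, the drives are visible to the OS, and the scan output keeps its usual layout (device path first, then an Opal version column that reads No for unsupported drives).

# Hypothetical helper: report which drives sedutil-cli sees as Opal-capable.
# Assumptions: sedutil-cli is installed, and the data drives are already
# visible to the OS (see the bind step in the removal procedure below).
import subprocess

scan = subprocess.run(
    ["sudo", "sedutil-cli", "--scan"],
    capture_output=True, text=True, check=True,
)

for line in scan.stdout.splitlines():
    if not line.startswith("/dev/"):
        continue  # skip the banner and trailer lines of the scan output
    device, opal, *_ = line.split()
    # sedutil-cli prints an Opal version (for example, 2) for supported drives
    # and "No" for drives without TCG Opal support.
    if opal == "No":
        print(f"{device}: no Opal support; use an erase method approved by your organization")
    else:
        print(f"{device}: Opal {opal} supported; revertTPer can erase it")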
Remove a Cluster for Opal-Supported Drives

To remove data for Opal-supported drives, follow these steps. These steps irreversibly remove the data, so follow them only after you ensure that you no longer need the applicable data from the system.

1. Stop the Ocient processes running on the nodes by using these commands.

   For Loader Nodes, use this command. The command combines two commands that stop all processes related to loading.

   sudo systemctl disable lat && sudo systemctl stop lat

   For all types of nodes, use this command. The command combines two commands, where the first command stops processing on all nodes and the second command stops the main Ocient System process.

   sudo systemctl disable rolehostd && sudo systemctl kill -s SIGKILL rolehostd

2. Bind the data drives to an NVMe driver so that the drives become visible to the OS.

   sudo /opt/ocient/scripts/nvme_driver_util.sh bind_nvme

3. Run this command for each data drive. The impacted drives are the drives displayed as their association changes from UIO drivers to NVMe drivers. You can see the NVMe drive association by running the /opt/ocient/scripts/nvme_driver_util.sh script. All data drives are erased. For a scripted sketch of this step, see the example at the end of this section.

   sedutil-cli -n --revertTPer admin /dev/nvmeXn1

   If the nodes are already powered off and you cannot power them on, you can move the drives to another node with the sedutil-cli utility. Use the same command to erase the drive.

   sedutil-cli -n --revertTPer admin /dev/nvmeXn1

You cannot securely erase the OS drives while the OS is running. You can either use the secure erase facility from the BIOS or move the drives to an external host with a utility approved by your organization for the remove operation.
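For reference, the following sketch shows what the per-drive erase in Step 3 can look like when scripted. The drive list and password are placeholders, not values read from your system; confirm both before running anything, because revertTPer irreversibly destroys the data on each drive it touches.

# Illustrative loop over data drives for Step 3 above. DATA_DRIVES and
# SID_PASSWORD are placeholders -- verify them against your own system first,
# because revertTPer irreversibly erases each drive it touches.
import subprocess

DATA_DRIVES = ["/dev/nvme1n1", "/dev/nvme2n1"]   # placeholder device names
SID_PASSWORD = "admin"                           # password from the example command

for device in DATA_DRIVES:
    subprocess.run(
        ["sudo", "sedutil-cli", "-n", "--revertTPer", SID_PASSWORD, device],
        check=True,
    )
    print(f"Erased {device}")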