Storage - Knowledge Base Archives - Hivelocity Hosting
https://www.hivelocity.net/kb/category/storage/

Private Cloud Product Guide: VMware by Broadcom
https://www.hivelocity.net/kb/private-cloud-product-guide/


Private Cloud Introduction

Private Cloud (VMware) offers our clients all of the features of a state-of-the-art, multi-tenant VMware cloud environment in combination with Hivelocity’s managed services and infrastructure expertise. Leveraging Hivelocity’s expertise ensures that your cloud environment is supported by cutting-edge technologies and adheres to industry best practices, guaranteeing optimal performance and reliability. The infrastructure is engineered to provide high performance, scalability, high availability (no single points of failure), and secure cloud services to our clients, ensuring efficient operations and robust data management.

The core of Private Cloud is powered by VMware’s industry leading virtualization platform as well as Pure Storage. VMware’s robust features provide our clients with flexible resource allocation, seamless migration, and efficient workload management. This ensures a highly responsive and adaptable infrastructure capable of meeting diverse tenant requirements. Pure Storage provides our clients with high-performance, low-latency storage solutions with several performance options based on their application requirements. The integration of Pure Storage ensures that Private Cloud delivers rapid data access and supports demanding workloads, contributing to an optimized and responsive user experience.

To offer flexibility when it comes to management, Hivelocity offers two options to our clients: Advanced Managed and Base Managed, structured to meet each client’s requirements.

Advanced Managed (Multi-Tenant and Dedicated Hosts*)

For clients looking to offload many of the daily management tasks to an MSP, Hivelocity offers a 24x7x365 managed option that includes initial solutioning/design and infrastructure setup for VMware (compute and networking), infrastructure monitoring/alerting, as well as Hivelocity SRE support. This option relieves our clients of having to perform ongoing operational actions and frees them up to focus on what is important to them: their business applications and running their business.

With the Advanced Managed option, clients are provided access to the Cloud Director console to allow visibility into their cloud environment as well as to view the usage of currently deployed resources. Clients will be provided remote console access to the VM(s) to install and manage their applications. As changes are needed in the cloud environment, clients can simply contact Hivelocity Support via the normal ticketing process to have additional VMs created, started, stopped, restarted, updated, or deleted, as well as to request changes to the network access and connectivity to and from their VM(s).

Base Managed (Multi-Tenant and Dedicated Hosts*) 

For clients who are looking for more of a hands-on solution, Hivelocity offers a self-service option which allows them to provision and manage their resources within the Private Cloud platform. For this option, the client retains the responsibility to provision and maintain full control of their own VM(s), as well as to maintain the patch levels of their operating systems, network configurations, and alert remediation specific to their configuration. Hivelocity will be responsible for the initial setup of the account to ensure proper client access as well as the initial infrastructure monitoring setup within the Private Cloud (Multi-Tenant) environment. Hivelocity will provide monitoring and remediation services for the infrastructure and will send non-infrastructure related alerts to the client for remediation purposes.

Clients will have full access to the Cloud Director portal to perform management functions of their Private Cloud (Multi-Tenant) environment. Clients will have the ability to create, modify, start, stop, and remove virtual servers, virtual CPUs (vCPU), virtual RAM (vRAM), network, and storage resources. Some advanced configuration requests may require that a ticket be submitted to Hivelocity support to assist with the deployment of the request.

Private Cloud Optional Add On Services:

Advanced Load Balancer

  • VMware NSX Advanced Load Balancer (formerly known as Avi Networks) uses a software-defined architecture that separates the central control plane (Avi Controller) from the distributed data plane (Avi Service Engines). NSX Advanced Load Balancer is 100% REST API based, making it fully automatable and seamless with the CI/CD pipeline for application delivery. With predictive autoscaling NSX Advanced Load Balancer can scale based on elastic application loads across multi-cloud environments, including bare metal servers, virtual machines, and containers.
  • For security, NSX Advanced Load Balancer features an Intelligent Web Application Firewall (iWAF) that covers OWASP CRS protection, support for compliance regulations such as PCI DSS, HIPAA, and GDPR, and signature-based detection. It deploys a positive security model and application learning to prevent web application attacks. Additionally, built-in analytics provide actionable insights on performance, end-user interactions, and security events in a single dashboard (Avi App Insights) with end-to-end visibility. For container-based microservices applications, NSX Advanced Load Balancer offers a container ingress that provides traffic management, service discovery, and application maps.

Advanced Patch Management

  • Hivelocity will be providing operating system patching for all of the current vendor-supported operating systems (as detailed here: https://help.automox.com/hc/en-us/articles/5352186282644-Supported-Operating-Systems) on a monthly basis as patches are released. Hivelocity will set up a new account in Automox and provide client access credentials. Once access is established, Hivelocity will set up all client VMs that need to receive patching, which requires a small agent to be installed on each OS and therefore requires OS access. Once the agents are installed, Hivelocity will set up the initial patching schedule based on client requirements, provide console training, and hand over Automox access to the client. Automox will notify our clients when patches are available to allow proper application testing (UAT) and will only deploy patches during an agreed-upon maintenance window as configured in the patching schedule. Clients will have access in the Automox portal to install emergency/on-demand patches as needed. Any assistance needed can be requested by support ticket.
  • Automox Powered

Advanced Virtual Gateway Firewall

  • VMware NSX Gateway Firewall is a software-only, layer 2-7 firewall that enables you to achieve consistent network security coverage and unified management for all of your workloads, regardless of whether they’re running on physical servers, in a private or public cloud environment, or in containers.
  • It incorporates advanced threat prevention capabilities such as intrusion detection/prevention (IDS/IPS), URL filtering, and malware detection (using network sandboxing and other techniques), as well as routing and virtual private networking (VPN) functionality.
  • When the NSX Gateway Firewall is deployed in conjunction with the NSX Distributed Firewall, it’s easy to extend consistent layer 2-7 security controls across all applications and workloads.

Advanced Virtual Distributed Firewall

  • The VMware NSX Distributed Firewall is a software-defined Layer 7 firewall purpose-built to secure multi-cloud traffic across virtualized workloads. It provides stateful firewalling with IDS/IPS, sandboxing, and NTA/NDR— delivered as software and distributed to each host. With complete visibility into applications and flows, the NSX Distributed Firewall delivers superior security with policy automation that’s linked to the workload lifecycle. Unlike traditional firewalls that require network redesign and traffic hair-pinning, the NSX Distributed Firewall distributes the firewalling to each host, radically simplifying the security architecture. This allows security teams to easily segment the network, stop the lateral movement of attacks, and automate policy in a vastly simpler operational model.

Advanced Firewall with Advanced Threat Protection

  • VMware’s NSX Advanced Threat Prevention (ATP) provides network security capabilities that protect organizations against advanced threats. NSX ATP combines multiple detection technologies – Intrusion Detection/Prevention System (IDS/IPS), Network Sandboxing, and Network Traffic Analysis (NTA) – with aggregation, correlation, and context engines from Network Detection and Response (NDR). These capabilities complement each other to provide a cohesive defensive layer. As a result, ATP increases detection fidelity, reduces false positives, and accelerates remediation while decreasing security analysts’ manual work.
  • IDS/IPS: This technology inspects all traffic that enters or leaves the network, detecting and preventing known threats from gaining access to the network, critical systems, and data. IDS/IPS looks for known malicious traffic patterns to hunt for attacks in the traffic flow. When it finds such attacks, it generates alerts for use by security analysts. Alerts are also logged for post-incident investigation.
  • Network Sandbox: This is a secure isolation environment that detects malicious artifacts. It analyzes the behavior of objects, such as files and URLs, to determine if they are benign or malicious. Because it does not rely on signatures, the sandbox can detect novel and highly targeted malware that has never been seen before.
  • NTA: This technology uses machine learning (ML) algorithms and advanced statistical techniques on network traffic, log files, and traffic flow records to develop a baseline of everyday activities, and then alerts on deviations from that baseline. NTA can identify protocol, traffic, and host anomalies as they appear. Of course, not all anomalies represent threats; that’s why VMware’s NTA implements additional ML and rule-based techniques to determine if the anomaly is malicious. This analysis pipeline keeps false positives to a minimum, reducing the security team’s work so the team can focus on real issues.
  • NDR: NDR consists of aggregation, correlation, and context engines. The aggregation engine collects signals from individual detection technologies. It combines them to reach a verdict (malicious or benign) on network activities. The correlation engines combine multiple related alerts into an “intrusion campaign.” The context engines collect data from various sources (including sources outside NSX) to add helpful context to the information provided to security analysts.
  • Advanced VPN
  • Additional VPN tunnels
  • Advanced Backup powered by Veeam
  • Advanced DRaaS powered by Zerto
  • Microsoft SPLA and other 3rd Party Licensing
  • Advanced VMware Migration Services (vCDA)
  • Professional Services Migration (via Partner, for Complex Migrations)

Cloud Storage Powered by Pure Storage

The Hivelocity Cloud 2.0 is built using best-in-class storage arrays from Pure Storage. Pure allows our clients to utilize various tiers of storage to ensure their applications have the performance they need depending on workload, as well as more budget-friendly options to ensure our clients’ data retention policies are met.

  • Performance Tier 1 Storage
  • Standard Tier 2 Storage
  • Backup Tier 3 Storage

Performance Tier 1 Storage:

Experience unparalleled performance with Hivelocity’s Performance Tier, designed to meet the demanding requirements of modern businesses. Our cutting-edge flash storage technology ensures lightning-fast access to your data, delivering the speed and responsiveness needed for critical applications. With ultra-low latency and high throughput, the Performance Tier empowers your organization to thrive in the era of real-time analytics and data-driven decision-making.

Key Features:

  • NVMe Flash Technology: Leverage the power of Non-Volatile Memory Express (NVMe) to unlock the full potential of flash storage, providing a quantum leap in speed and responsiveness.
  •  Predictive Analytics: Proactively address potential issues with Pure1® predictive analytics, ensuring optimal performance and minimizing disruptions.
  • Scalability: Seamlessly scale your storage infrastructure to accommodate growing data demands without sacrificing performance or worrying about procuring additional hardware as your storage requirements grow.

Standard Tier: Reliable and Cost-Efficient Storage Solutions

Hivelocity’s Standard Tier offers a robust and reliable storage solution that balances performance with cost-effectiveness. Ideal for a wide range of workloads, this tier provides a cost-efficient way to store and manage your data without compromising on quality or reliability. Whether you’re running business applications, virtualized environments, or databases, the Standard Tier delivers the reliability you need at a price point that makes sense for your budget.

Key Features:

  • All-Flash Array: Benefit from the speed and efficiency of all-flash storage, ensuring consistent performance across diverse workloads.
  • Data Reduction: Maximize storage efficiency with inline deduplication and compression, reducing your overall storage footprint and optimizing costs.
  • Reliability: Rely on Pure Storage’s proven track record for high availability and data integrity, minimizing the risk of downtime or data loss.

Backup Tier: Safeguarding Your Data Assets

Ensure the resilience and security of your data with Pure Storage’s Backup Tier. This tier is specifically designed to address the critical need for data protection, providing robust backup and recovery capabilities. With comprehensive features such as snapshot technology, data replication, and integration with leading backup solutions, the Backup Tier offers a solid foundation for building a reliable data protection strategy.

Key Features:

  • Snapshots and Replication: Create point-in-time snapshots for rapid data recovery and replicate data across geographically dispersed locations to ensure business continuity.
  • Integration with Backup Solutions: Seamlessly integrate with leading backup solutions, streamlining your backup and recovery processes.
  • Compliance and Security: Adhere to regulatory requirements and enhance data security with encryption, access controls, and audit trails.

Data Protection Services Powered by Veeam

Safeguard your VMware virtualized infrastructure with Hivelocity’s Data Protection service (Powered by Veeam) designed to address the unique challenges of VMware environments. Veeam Backup for VMware combines powerful features with seamless integration, providing comprehensive data protection tailored specifically for VMware-based workloads. Elevate your virtualization strategy with Veeam’s advanced backup options and ensure the availability, reliability, and recoverability of your critical data.

Service Offerings

  • Veeam Backup & Replication for VMware: Veeam’s flagship solution, Backup & Replication, offers specialized capabilities for VMware environments, providing seamless backup, replication, and recovery processes. Ensure the protection of your virtual machines (VMs) with a solution optimized for VMware’s unique architecture.
  • Veeam Explorer for VMware: Gain granular visibility into your VMware backups with Veeam Explorer, allowing for efficient recovery of individual items, such as files or application objects, directly from the backup.
  • VMware vSphere Integration: Benefit from tight integration with VMware vSphere, leveraging Veeam’s capabilities to enhance your vSphere environment’s data protection and recovery.

Key Backup Options

  • Image-Based VM Backups: Veeam’s image-based backup approach captures entire VM images, ensuring comprehensive protection and enabling efficient recovery of entire VMs.
  • Incremental Backups with Advanced Deduplication: Minimize backup storage requirements and optimize performance with Veeam’s advanced deduplication technology, capturing only changed data since the last backup.
  • Instant VM Recovery: Reduce downtime with Veeam’s instant VM recovery, allowing you to restart failed VMs directly from a backup file in minutes.
  • Application-Aware Processing: Ensure consistent and reliable backups of applications running in VMs with Veeam’s application-aware processing, supporting applications like Microsoft Exchange, SQL Server, and Active Directory.
  • VeeamZIP for Quick Ad-Hoc Backups: Perform ad-hoc backups of VMs with VeeamZIP, providing a quick and easy way to create point-in-time backups for testing, development, or archival purposes.
  • SureBackup Verification: Validate the recoverability of your backups with Veeam’s SureBackup, automatically verifying the integrity of VM backups and ensuring they can be successfully recovered.

Secure your VMware virtualized environment with confidence, leveraging Veeam’s tailored backup solutions. Whether you’re dealing with data loss, system failures, or simply need to ensure compliance, Veeam Backup for VMware environments delivers the reliability and flexibility your organization requires for efficient data protection.

Getting Started

How to:

Virtual Machines:

vApps

How to Partition and Mount Storage Devices in Linux
https://www.hivelocity.net/kb/how-to-partition-and-mount-storage-devices-in-linux/

When a new drive is installed in your system, before it can be used for storage, the first step is to get it initialized. This process involves identifying the drive within the system, setting a partition table and creating partitions, creating a filesystem for each partition, mounting the partition, and lastly, ensuring that the mounting instructions are saved for when the system reboots (also known as mount persistence). In the following tutorial, we’ll show you step-by-step instructions to identify, partition, and mount your new storage drive so it’s ready for use.

Partitioning and Mounting Your Storage Devices in Linux

The following sections and commands will provide you with the information required to partition your storage devices and allow you to begin using them in your Linux system.

Identifying the Disk

The first necessary step when working on your new storage drive is to identify which drive you would like to work on. It’s very important to get this step right, so please follow along closely with the instructions listed below.

  1. First, install the parted tool from your package manager onto your system if it’s not already available. This can be done using the command:

    yum install parted

  2. Next, run the following command to list out all available drives in your system:

    lsblk

    Screenshot showing the results of the "lsblk" command

  3. Once you’ve found the drive you’d like to work with, use the command below to confirm which drive in your system is new and uninitialized (drives without a partition table will show an error in parted’s output). In this case, the drive we’re using is sdc, as we know that it’s the new drive we’ve just introduced to the system. *Note: the following command will not work with drives that are already in use.

    sudo parted -l | grep error

    Screenshot showing the results of the "sudo parted -l | grep error" command
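If several drives are installed and it isn't obvious which one is new, listing the drives together with their size and model can help narrow it down. This is just a convenience check using lsblk's output columns (it assumes a reasonably recent lsblk):

lsblk -d -o NAME,SIZE,MODEL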

Setting a Partition Table and Distributing/Creating Partitions

The second step of the process involves creating a partition table and the actual partitions which will be used to divide up and store data on the drive.

  1. First, you need to set the partition table/disk label for the drive. For this example, on a more modern system, we will go with GPT (GUID Partition Table); for older drives/systems MBR might be the better option, but in most cases these days GPT is the one to select. *Note: in this example, /dev/sdc is the drive we are working with, but that will likely not be the same in your system. Please be sure to use the name of your own device when entering the following commands.

    1. For the GUID partition table (GPT), use the command:

      sudo parted /dev/sdc mklabel gpt

    2. For the MBR partition table, use the command:

      sudo parted /dev/sdc mklabel msdos

      Screenshot showing the results of the "sudo parted /dev/sdc mklabel gpt" command

  2. Next, you will need to create new partitions on the drive. Here you can create multiple partitions which will later become /dev/sdc1, /dev/sdc2, /dev/sdc3 and so on.

    *Note: keep the following in mind when using the command below:

    • The percentages are used to describe the amount of space on each partition.
    • -a opt in the command sets the alignment type which should be set to optimal in order to align to multiple physical block sizes to guarantee performance.
    • EXT4 is used here as it is the default filesystem that we intend to use with these partitions.

      Now, run the following commands to begin partitioning. In this example, we will create 3 separate partitions. Please take note of the percentages used. If you intend to use one single partition that will span your entire drive, ensure that your percentages are set to “0% 100%” instead.

      sudo parted -a opt /dev/sdc mkpart primary ext4 0% 30%

      sudo parted -a opt /dev/sdc mkpart primary ext4 30% 75%

      sudo parted -a opt /dev/sdc mkpart primary ext4 75% 100%

      Screenshot showing the results of the multiple partitioning commands listed above

      Screenshot showing the new partitions listed out using the "lsblk" command
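Before moving on to filesystems, you can optionally print the partition table to confirm the layout looks the way you intended (shown here for the same example device, /dev/sdc):

sudo parted /dev/sdc print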

Creating a Filesystem for Each Partition

The next part of the process requires you to create a filesystem for each partition that’s been created.

  1. To create a filesystem on the partitions you’ve just created, run the command listed below.

    *Note: keep the following in mind when using the command shown below:

    • -L <Label-Name> only adds a descriptive label to the partition and is not mandatory.
    • Please note the ext4 portion, as this determines the type of filesystem created; if desired, this can instead be xfs, ext3, and so on.
    • Notice that in this example we are selecting a partition (sdc1) and not the disk (sdc).
    • Lastly, the same command below applies to the sdc2 and sdc3 partitions we created, except with their own labels inserted into the command (see the example a little further below).

      sudo mkfs.ext4 -L Photos /dev/sdc1

      Screenshot showing the results of the "sudo mkfs.ext4 -L Photos /dev/sdc1" command

  2. Next, you can run the following command to view the filesystems we’ve just created:

    lsblk -fs

    Screenshot showing the results of the "lsblk -fs" command
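As noted above, the remaining partitions get the same treatment as sdc1, just with their own labels. For example (the label names used here are only placeholders; substitute whatever makes sense for your data):

sudo mkfs.ext4 -L Videos /dev/sdc2

sudo mkfs.ext4 -L Backups /dev/sdc3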

Mounting the Newly Created Partitions

Now that your partitions and filesystems have been created, the last part of this process is to mount the partitions we’ve created into your desired directories.

  1. First, create a new directory which will be used to mount and connect the chosen partition. In this example we will continue using /dev/sdc1.

    sudo mkdir /Photos

  2. After creating the new directory, we must now mount the partition to this new directory.

    1. The command structure is mount -o defaults <SourcePartition> <DestinationDirectory>.

      sudo mount -o defaults /dev/sdc1 /Photos

      Screenshot showing the results of the "sudo mount -o defaults /dev/sdc1 /Photos" command

  3. Now, we need to ensure that the mounting point will be persistent throughout future reboots. To do this, we will need the UUID of the partition. To get the UUID, use the following command:

    blkid

    The output you receive will be similar to the text below. The example UUID has been highlighted here, and you will need to copy this value from your own results for the command that follows in step 4.

    /dev/sdc1: LABEL="Photos" UUID="2aa5a1a1-8805-44f6-881b-32e866703005" BLOCK_SIZE="4096" TYPE="ext4" PARTLABEL="primary" PARTUUID="39754fad-7d45-4c13-be15-e8e9b1802f5e"

  4. Now that you have the partition’s UUID, edit the /etc/fstab file and enter the values as shown below to ensure that after each reboot, the mounting point will remain persistent. *Note: be sure to save the /etc/fstab file once you’ve finished making edits.

    • *Note: If you plan to remove the drive in the future, ensure to remove or comment (#) out the entry in the file.

      The values entered are as follows:

      UUID="UUID-VALUE-HERE" /<Directory> <Filesystem> defaults 1 2

      Screenshot showing edits being made to the /etc/fstab file to include the UUID of the newly created partition

  5. Now that you’ve completed making your edits and saved the file, you can test the file and mounting points using the following command.

    sudo mount -a

    Then, simply repeat this process for any remaining partitions.
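For reference, a complete /etc/fstab entry for the example partition above might look like the line below. This is only a sketch built from the sample UUID shown in step 3 and the /Photos mount point used earlier; substitute your own UUID, directory, and filesystem type. (The quotes that blkid prints around the UUID are not required in /etc/fstab.)

UUID=2aa5a1a1-8805-44f6-881b-32e866703005 /Photos ext4 defaults 1 2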

-written by Pascal Suissa

S.M.A.R.T Data Reports – Evaluating Linux Storage Drive Health
https://www.hivelocity.net/kb/smart-data-reports-evaluating-linux-storage-drive-health/

Drive health in your Linux system can be evaluated and retrieved using various packages available within your operating system’s package manager. Drive health information is available primarily through a Self-Monitoring, Analysis and Reporting Technology [SMART] monitoring system, which is available in both hard-disk drives and solid-state drives. While the SMART data may not accurately predict a future drive failure, it can show abnormal error rates and provide important information that will assist you in making decisions that might save your data before suffering a drive failure.

Retrieving S.M.A.R.T Data

The commands listed in the sections below will provide you with information regarding the S.M.A.R.T data of your storage devices and their current health conditions. The following sections are divided by applicable device type.

Evaluating SATA Hard-Disk Drives and SATA SSDs

This section includes instructions for generating reports for SATA hard drives and is applicable to both spinning-platter hard-disk drives and solid-state SSDs with no moving parts. The information for both types is gathered using the same smartmontools package.

  1. First, if the package is not already available in your system, install smartmontools from your package manager using the following command:

    yum install smartmontools

    Screenshot showing the installation process of the smartmontools package

  2. Next, run the following command to list all the drives in the system that are available for evaluation:

    lsblk

    Screenshot showing the results of the "lsblk" command with a list of drives available for scanning

  3. Once you’ve found the drive you’d like to evaluate, grab the name of it and run the command listed below. The smartctl tool will then output the results of the S.M.A.R.T data along with further information such as how long the drive has been running, how many errors it has, and whether the S.M.A.R.T drive health test has passed or not. In this case, the example drive is /dev/sda. *Note: we are selecting drives here [sda, sdb, sdc], not partitions [sda1,sdb4, sdc7].

    sudo smartctl -a /dev/sda

    Screenshot showing the results of the "sudo smartctl -a /dev/sda" command
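If you only need the overall pass/fail verdict rather than the full attribute listing, smartctl can also print just the drive's health self-assessment:

sudo smartctl -H /dev/sda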

Evaluating NVMe Solid-State Drives (M.2 SSD)

This section includes instructions for generating reports for NVMe drives, which require a different package to retrieve their health information. In this particular case, we’ll be using the nvme-cli package.

  1. First, check that the nvme-cli package is available on your system already. If not, install it with your package manager using the command:

    yum install nvme-cli

  2. Next, use the following command to retrieve a list of your available NVMe devices:

    nvme list

  3. To retrieve the health results of a specific drive you’d like to evaluate, use the command listed below. In this particular example, the drive we’re checking is /dev/nvme0n1.

    nvme smart-log /dev/nvme0n1

  4. Lastly, use the following command to retrieve the error log of the drive in question.

    nvme error-log /dev/nvme0n1
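If the system has several NVMe drives, a small shell loop can pull the SMART log for each of them in one pass. This is just a convenience sketch and assumes your namespaces follow the usual /dev/nvmeXn1 naming:

for dev in /dev/nvme*n1; do
    echo "== $dev =="
    sudo nvme smart-log "$dev"
done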

Performing Drive Tests on SATA Hard-Disk Drives and SSDs

The smartctl tool from the smartmontools package allows users to perform four built-in self-tests that evaluate the condition of their storage device. The four tests are listed below and can be performed to evaluate drive health and performance.

*Note: please be advised that if the disk is not in good condition and has a FAILED SMART status, then these tests will only put further stress on the device and are therefore not recommended. If you have any concerns regarding this, please reach out to our Technical Support for further assistance.

  1. The first test is a short test, which performs a quick self-test on the drive to check for errors. In this particular example the drive we’re checking is /dev/sda.

    sudo smartctl -t short /dev/sda

    Screenshot showing the results of the "sudo smartctl -t short /dev/sda" command

  2. The second test is a long test, which performs a more thorough, extended self-test on the drive to check for errors. Once again, the drive we’re checking in this example is /dev/sda.

    sudo smartctl -t long /dev/sda

    Screenshot showing the results of the "sudo smartctl -t long /dev/sda" command

  3. The third test is a conveyance test (normally used for PATA Drives) which performs a test to check for possible damages that can occur during device transport.

    sudo smartctl -t conveyance /dev/sda

    Screenshot showing the results of the "sudo smartctl -t conveyance /dev/sda" command

  4. The fourth test is a Select test (normally used for PATA drives) which is meant to check only a specified range of logical block addresses (LBAs).

    sudo smartctl -t select,10-20 /dev/sda

    Screenshot showing the results of the "sudo smartctl -t select,10-20 /dev/sda" command

Once these tests have been completed, results for each test can be found by running the following command (the results are located at the bottom of the report):

sudo smartctl -a /dev/sda

*Note: to abort any test after it has begun running, use the following command:

sudo smartctl -X /dev/sda
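If you only want to review the self-test history rather than scrolling through the full report, smartctl can also print just the self-test log:

sudo smartctl -l selftest /dev/sda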

Evaluating the S.M.A.R.T DATA Report

The reports generated by the smartctl tool can be intimidating at first. In the following section we’ve provided information which should help you make sense of these reports and take action when necessary.

First and foremost, don’t panic. The attributes and values you see in the report might make you think your drive is in trouble, but it’s important to understand that all drives will eventually have some less-than-ideal values showing for certain attributes. This alone does not necessarily mean that your drive is in trouble.

Two important notes:

  1. Some of the items listed below might not exist in your report as some of these are brand-dependent.
  2. The items listed below are not a complete list of all attributes which may be shown. These are however the attributes that require the most attention when reading the report.

Important S.M.A.R.T. Report Attributes and What They Mean

  1. Read Error Rate – (the lower this value the better) – This shows the rate of read errors occurring when the disk is being read.
  2. Throughput Performance – (the higher this value the better) – This shows the overall performance of the hard disk drive, which ideally should not be lower than what is normally seen.
  3. Spin-Up Time – (the lower this value the better) – This shows the average time it takes for the drive to become fully operational.
  4. Reallocated Sectors Count – (the lower this value the better) – This shows the count of bad sectors that have been found and remapped. *Note: A high or steadily increasing reallocated sector count observed in daily trends can indicate a possible drive failure.
  5. Current Pending Sector Count – (the lower this value the better) – This shows the count of sectors waiting to be remapped because of unrecoverable read errors.
  6. Seek Error Rate – (the lower this value the better) – This shows the rate of seek errors found on the magnetic heads within the drive. *Note: high values in this attribute indicate potential failure of the mechanical positioning system.
  7. Seek Time Performance – (the higher this value the better) – This shows the performance of the drive’s seek operations.
  8. Power-On Hours – This shows the count of hours that the drive has on record for the power-on state. *Note: the recommended value for life expectancy of a drive depends on the drive brand and model.
  9. Spin Retry Count – (the lower this value the better) – This shows the count of spin start retries as the drive attempts to reach full operational speed.
  10. Reported Uncorrectable Errors – (the lower this value the better) – This shows the count of errors that could not be recovered using the hardware’s error-correcting code.
  11. Command Timeout – (the lower this value the better) – This shows the count of operations aborted due to hard-disk drive timeout.
  12. Reallocation Event Count – (the lower this value the better) – This shows the total count of attempts to transfer data from reallocated sectors to spare areas.
  13. Soft Read Error Rate or TA Counter Detected – (the lower these values the better) – These indicate the number of uncorrectable read errors.
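As a convenience, the attribute table alone can be printed with smartctl -A and filtered for the attributes that matter most. The grep pattern below is only an example; exact attribute names vary by drive vendor and model, so adjust it to match what your drive actually reports:

sudo smartctl -A /dev/sda | grep -E 'Reallocated_Sector|Current_Pending_Sector|Reported_Uncorrect|Spin_Retry|Command_Timeout'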

-written by Pascal Suissa

How to Use Wasabi Cloud Storage with cPanel
https://www.hivelocity.net/kb/how-to-use-wasabi-cloud-storage-with-cpanel/

In order to use Wasabi Cloud Storage alongside cPanel, you must first set up your storage bucket within Wasabi. If you need help setting this up, please refer to: How to Setup Wasabi Cloud Storage.

Once you’ve created your bucket in Wasabi, you’ll need to save the following details somewhere you can easily access them for use with cPanel:

  • Bucket name: Must begin with a lowercase letter or number.
  • Region:
  • Access Key ID:
  • Secret Access Key: For security reasons, you will not see the Secret Access Key. Only enter the Secret Access Key when you create this destination, or when you change the Secret Access Key.

*Note: The S3 endpoint and region should be chosen based on your server’s location.

This is the list of the regions and the service URLs for Wasabi:

What are the service URLs for Wasabi’s different storage regions? – Wasabi Knowledge Base (zendesk.com)

The Wasabi service URLs are as follows:

  • Wasabi US East 1 (N. Virginia): http://s3.wasabisys.com (alias: http://s3.us-east-1.wasabisys.com)
  • Wasabi US East 2 (N. Virginia): http://s3.us-east-2.wasabisys.com
  • Wasabi US Central 1 (Texas): http://s3.us-central-1.wasabisys.com
  • Wasabi US West 1 (Oregon): http://s3.us-west-1.wasabisys.com
  • Wasabi EU Central 1 (Amsterdam): http://s3.eu-central-1.wasabisys.com (alias: http://s3.nl-1.wasabisys.com)
  • Wasabi EU West 1 (London): http://s3.eu-west-1.wasabisys.com (alias: http://s3.uk-1.wasabisys.com)
  • Wasabi EU West 2 (Paris): http://s3.eu-west-2.wasabisys.com (alias: http://s3.fr-1.wasabisys.com)
  • Wasabi AP Northeast 1 (Tokyo): http://s3.ap-northeast-1.wasabisys.com
  • Wasabi AP Northeast 2 (Osaka): http://s3.ap-northeast-2.wasabisys.com

Setting up Wasabi in cPanel

To set up Wasabi Storage with cPanel, just follow these steps:

  1. First, log in as the root user in WHM and go to the backup config section:

    Home »Backup »Backup Configuration

    Screenshot of the WHM interface highlighting the option to Create New Destination

  2. Next, click the button labelled Create New Destination and fill in the form below with the bucket information that was created in Wasabi.

    Screenshot of the Create New Destination screen highlighting the Destination Name field

  3. Once you have the details filled in, you should now save and validate the destination to confirm all information is valid and working.
    • If it’s valid, you can enable the backup, and the next time it runs it will upload the backups to the Wasabi bucket.
    • If not, review the steps above and confirm it’s set up correctly.

*Note: To force a cPanel backup to run, you can use the following command over SSH.

/usr/local/cpanel/bin/backup --force

Troubleshooting

If you believe your firewall could be causing issues, review the steps below for CSF hostname whitelisting.
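Before changing any firewall rules, it can also help to confirm that the server can reach the Wasabi endpoint at all. A quick check (shown against the us-east-1 service URL; substitute the URL for your region) is:

curl -I https://s3.wasabisys.com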

For CSF and DNS/IP whitelisting, use the following steps to add to CSF:

  1. Open the file /etc/csf/csf.dyndns and add the hostname of the region you are connecting to. You can also whitelist all regions using the list below:

    s3.wasabisys.com
    s3.us-east-1.wasabisys.com
    s3.us-east-2.wasabisys.com
    s3.us-central-1.wasabisys.com
    s3.us-west-1.wasabisys.com
    s3.eu-central-1.wasabisys.com
    s3.nl-1.wasabisys.com
    s3.eu-west-1.wasabisys.com
    s3.uk-1.wasabisys.com
    s3.eu-west-2.wasabisys.com
    s3.fr-1.wasabisys.com
    s3.ap-northeast-1.wasabisys.com
    s3.ap-northeast-2.wasabisys.com

  2. Open the file /etc/csf/csf.conf and set DYNDNS = "300" (which would check for IP updates every 5 minutes).

    *Note: If you want the activity of the IP also ignored, set
    DYNDNS_IGNORE = "1"

  3. When you’ve finished making your changes, restart the firewall (see the command below).
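A minimal way to do this from the shell, assuming the standard csf command-line tool, is:

csf -r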

How to Setup Wasabi Cloud Storage
https://www.hivelocity.net/kb/how-to-setup-wasabi-cloud-storage/

Setting Up Wasabi Cloud Storage
  1. First, go to https://console.wasabisys.com/#/login and log in.
  2. When you log in to Wasabi, the interface will look like the screenshot below. Although you have many options, the first thing you have to do is create a bucket to use the storage.

    Screenshot of the Wasabi control panel showing no buckets have been created yet

  3. During creation you have to name your Bucket and Select a Region.

    Screenshot of the Bucket Naming screen highlighting the available region choices

    1. The bucket name has to be unique to avoid errors, so enter a unique name in the Bucket Name field shown above.
    2. Select the closest region to the server that will be using it or ask the client via ticket if you are unsure.
      • us-west-1 – Oregon
      • ap-northeast-1 – Tokyo
      • ap-northeast-2 – Osaka
      • eu-central-1 – Amsterdam
      • eu-west-1 – London
      • eu-west-2 – Paris
      • us-central-1 – Plano, Texas
      • us-east-1 – North Virginia
      • us-east-2 – North Virginia
  4. Next, in Step 2, Set Properties, leave all options off, and click Next.

    Screenshot of the Set Properties screen showing all options left off

  5. Step 3 is to Review your selections. If everything looks correct, click Create Bucket.

    Screenshot of the Review screen highlighting the Create Bucket icon

  6. After creating your new bucket it will look like this:

    Screenshot of the Wasabi interface showing the Bucket List and the newly created bucket

  7. Now, click the bucket name and go to Policies.
  8. You can look at Sample Policies on the left.

    Screenshot of the Policies screen

  9. Set up a policy for your bucket. This is Full Access:

    Screenshot of the Bucket policy editor showing an example of Full Access permissions

    1. Give permission to the bucket:
      {
        "Version": "2012-10-17",
        "Statement": [
          {
            "Effect": "Allow",
            "Action": "s3:*",
            "Resource": "*"
          }
        ]
      }
  10. Once your permissions are set, you’ve successfully set up storage in Wasabi.

Disabling / Enabling Public Access

To ensure that your new bucket has public access disabled or to enable public access for buckets you want to have accessible to anyone:

  1. On your Bucket List, click on the 3 vertical dots to the right of your bucket under the Actions column, and select Settings.

    Screenshot of the Wasabi interface showing the Bucket List and the newly created bucket

  2. On the Properties panel, find the option for Public Access Override and open the drop-down menu.
  3. Click the switch labelled Turn on override. This will override the bucket’s policies to allow for the enabling/disabling of public access.

    *Note: To undo and return the bucket to its default policy, click the switch again to Turn off override.

  4. Once override is enabled, click the radio buttons below to either Enable Public Access or Disable Public Access depending on your desired outcome.
  5. Once you’ve finished, return to your Bucket List. You should now see the status under the Public Access column listed as either Enabled or Disabled, depending on your selection.

Now that your bucket is created and your policies are set, the next step is to set Wasabi up on your server.

Wasabi Explorer for Windows
https://www.hivelocity.net/kb/wasabi-explorer-for-windows/

How Do I Use Wasabi Explorer for Windows with Wasabi?

Wasabi Explorer is a free app that enables you to share files between your Windows host and the Wasabi hot cloud storage service. This app is a version of CloudBerry Explorer from MSP360 that has been customized for use with Wasabi hot cloud storage. Wasabi Explorer provides a user interface to your Wasabi storage account by allowing you to access, move, and manage files across your local storage and your Wasabi storage buckets.

Wasabi Explorer’s features include:

  • Familiar Windows File Explorer-like user interface
  • Compression and encryption
  • Ability to access multiple Wasabi storage regions
  • Upload rules
  • Header editing
  • Multi-part uploads

*Note: if you want an app that lets you not only share Wasabi files with your Windows host but also mount your Wasabi buckets as a disk volume on your Windows host, you may wish to consider CloudBerry Drive, a paid app from CloudBerry Lab / MSP360.

In this article we’ll cover the installation instructions for the Windows version of Wasabi Explorer for Cloud Storage. An installation tutorial video is also provided here.

Prerequisites:

  • A valid Wasabi storage account and Wasabi API key set
  • Minimum Windows OS:
    • Windows 7/8/10
    • Windows Server 2008 or higher
  • Minimum System requirements
    • Microsoft .NET Framework 4.0,
    • 1.4 GHz 64-bit processor
    • 512 MB RAM, 100 MB minimum disk space, Gigabit (10/100/1000baseT) Ethernet adapter

Installation Instructions:

  1. First, download the Wasabi Explorer install package (Updated Feb 14th 2022 – v6.2.2.10 Windows only)

  2. Once your download is complete, install Wasabi Explorer by following the prompts in the installation package. The workflow will look like the images below:

    Screenshot of the Wasabi Explorer Installation Wizard

    Screenshot of the Wasabi Explorer End User License Agreement

    Screenshot highlighting the Install Location screen

    Screenshot of the Completion screen in the Wasabi Explorer Installation Wizard

  3. Now that Wasabi Explorer has been successfully installed, we must register the product. When prompted to enter in an email address, you can use the same email used with your Wasabi account.

    Screenshot of the Wasabi Explorer registration screen and the enter Email prompt

  4. Now, enter a valid access key and secret key from your Wasabi account to connect the Wasabi Explorer application to your storage account.

    Screenshot of the Add New Wasabi Account screen

    *Note: You can use the Test Connection button to verify Wasabi Explorer can talk to Wasabi and the API key set is validated. You will receive the Connection Success message if connectivity and key validation is successful.

    Screenshot showing the Connection Success screen

  5. Click OK to close the window.

  6. Once the Wasabi connection is built, you will see Wasabi show up on the list of Registered Accounts as shown below:

    Screenshot showing the list of Registered Accounts in Wasabi

  7. To start using Wasabi Explorer to transfer files from your PC to Wasabi, select My Computer as the Source and Wasabi as the Destination as shown in the example below:

    Screenshot of the Wasabi Explorer Interface

  8. At this point, Wasabi Explorer is ready for use, and all of the existing Wasabi storage buckets are now available. If you wish to create a new bucket, you can do this from the Wasabi storage pane by selecting the blue cube (on the lower right) as shown below:

    Screenshot showing how to create a new storage bucket within the Wasabi Explorer interface

  9. Enter a unique Bucket name and select the appropriate region for the bucket:

    Screenshot of the Create New Bucket screen highlighting options for Bucket Locations

And there you have it!

Mount Wasabi on Linux Using s3fs-fuse
https://www.hivelocity.net/kb/mount-wasabi-on-linux-using-s3fs-fuse/

How Do I Use S3FS with Wasabi?

S3FS (fuse) is certified for use with Wasabi. To use S3FS with Wasabi, please follow the command syntax example below. This tutorial may also be helpful.

Setup Access Key

To configure S3FS, you will need both the access key and secret key of your Wasabi account (these are S3-compatible credentials).

  1. First, replace the AWS_ACCESS_KEY_ID and AWS_SECRET_ACCESS_KEY shown below with your actual Wasabi access key and secret key values.

    $ vi /etc/passwd-s3fs

    Inside the file, add a single line in the form:

    AWS_ACCESS_KEY_ID:AWS_SECRET_ACCESS_KEY

  2. Next, make sure that the file has the proper permissions using:

    $ chmod 600 /etc/passwd-s3fs
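Equivalently, if you prefer not to open an editor, the same credentials file can be created and locked down in one step (replace the placeholders with your real Wasabi keys):

echo "AWS_ACCESS_KEY_ID:AWS_SECRET_ACCESS_KEY" > /etc/passwd-s3fs
chmod 600 /etc/passwd-s3fs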

Mounting an S3 Bucket

To mount s3fs, just run the command listed below (the mount point, /s3mnt in this example, must already exist):

s3fs storage5555 /s3mnt -o passwd_file=/etc/passwd-s3fs -o url=https://s3.wasabisys.com

You can also mount Wasabi via /etc/fstab by adding the following line:

s3fs#storage5555 /s3mnt fuse _netdev,allow_other,use_cache=/cache,url=https://s3.wasabisys.com 0 0

You should now be able to use the mounted Wasabi Cloud Storage.

*Note: this example discusses the use of Wasabi’s us-east-1 storage region. To use other Wasabi storage regions, please use the appropriate Wasabi service URL as described in this article.

How to Use the tar Command on a Linux Server
https://www.hivelocity.net/kb/how-to-use-tar-command-on-linux-server/

“Untar” a file

If you are working with an example.tar file, you can extract the files from it using:

tar xvf example.tar

If you are working with a gzipped tarball (example.tar.gz), you can extract the files from it using:

tar xvfz example.tar.gz

If you have example.tgz, you can extract the files from it using:

tar xzvf example.tgz

If the tarball has been compressed with bzip2 (example.tar.bz2), then you will need to have bzip2 installed. If all is well and bzip2 is installed, you can extract the files from it using:

tar xvjf example.tar.bz2

Sometimes you only want to extract certain directories from the tarball. An example of doing so would be:

tar xvzf example.tar.gz */DIRECTORY_YOU_WANT_REPLACES_THIS_TEXT/*

If you would like to see what is inside a tarball, you can use the command:

tar tvf example.tar

If you would like to see what is inside a gzip’d tarball, you can use the command:

tar tzf example.tar.gz

How to Take a Backup of the MBR on a Linux Server
https://www.hivelocity.net/kb/how-to-take-the-backup-of-mbr-on-linux-server/

It is always recommended that you take a backup of the MBR (Master Boot Record).

The MBR holds the hard disk partition table (and boot code) on your server.

We can use the dd command to take a backup of the MBR.

=================================
[root@~]# dd if=/dev/hdX of=/tmp/hda-mbr.bin bs=512 count=1
=================================

Replace X with the actual device name, e.g. /dev/hda or /dev/sda.

In order to restore it back on the server, use the following command (this restores only the 64-byte partition table portion of the backup, not the boot code):

=================================
[root@~]# dd if=/tmp/hda-mbr.bin of=/dev/hdX bs=1 count=64 skip=446 seek=446
=================================
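If you ever need to restore the entire 512-byte MBR (boot code, partition table, and signature) rather than just the partition table, a full restore would look like the command below. As always with dd, double-check the target device before running it:

=================================
[root@~]# dd if=/tmp/hda-mbr.bin of=/dev/hdX bs=512 count=1
=================================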

How to Increase the Size of the /tmp Partition
https://www.hivelocity.net/kb/how-to-increase-the-size-of-tmp-partition/

The following steps can be used to increase the size of the /tmp partition:

a. First, stop MySQL, Apache, and cPanel to prevent writing to the /tmp partition.

b. Take a backup of the current /tmp folder.

c. Unmount the /tmp partition. If you’re unable to, you can run lsof to see which processes are still writing to it, and kill them off.

d. Delete /usr/tmpDSK

e. Now edit the file /scripts/securetmp. Search for “tmpdsksize” and set the size you want in MB.

f. Save and quit the file.

g. Now you can untar the /tmp backup

h. Start all the services.

How Do I Manage a Virtual Host Skeleton?
https://www.hivelocity.net/kb/how-do-i-manage-virtual-host-skeleton/

Skeletons are file structure templates, which are used for fast automatic creation of predefined virtual host content when creating a physical hosting.

A skeleton file may contain only the following top-level directories:

* httpdocs
* httpsdocs
* cgi-bin
* anon_ftp
* error_docs

All other directories will be ignored during skeleton deployment.

Allowed skeleton file types are *.tgz and *.zip archives.

To activate a new custom skeleton, follow these steps:

Click the Skeleton icon on the Home page. The Skeleton management page will open:

Select the archive file that contains the skeleton. Use the Browse button to locate the desired file.

Click Send File. The new skeleton will be uploaded and activated.

NOTE
Each new skeleton replaces the previously used one. Now, the new skeleton will be used in the process of creating all new physical hosting instances until it is replaced by another skeleton (new or the default one).

You can always revert to using the default skeleton. To do so, just click the Default button on the Skeleton management page. The default skeleton will replace the currently used one and will be activated.
