Update firmware on APC Schneider Electric UPS Network Management Card

This guide helps you update the firmware on an APC UPS Network Management Card (e.g., AP9630, AP9631, AP9635, AP9537SUM). It is based on the official FAQ page provided by Schneider Electric:

https://www.se.com/us/en/faqs/FA156047/

The firmware download typically comes bundled with the firmware update tool (NMCFirmwareUpdateUtility.exe) and all the necessary files (multiple .bin files).

Upon launching the tool, you typically provide the Host (IP address), username, and password. The credentials must be for the administrator account to allow file transfer using the default protocol, SCP, on port 22. The tool is then supposed to take care of the rest.

When I clicked the Start Update button, everything looked good until the update failed with this error:

Firmware distribution does not match target device platform; this could be due to a corrupt distribution.
Unable to update device x.x.x.x with Boot Monitor; stopping all further processing of this device!

When I then tried to access the web interface, it resulted in the following error:

The Application Was Not Able to Load
You are attempting to access a Schneider device.
There was a problem loading the application. Please login to the device via telnet for more details.

Multiple attempts at using the firmware update tool resulted in the same errors. I believe this may happen when the Boot Monitor version in the firmware distribution matches the one already on the network card.

What worked for me was to perform a manual update using the FTP protocol.

However, unless you enabled it in the past, you will likely need to first enable the FTP server on the card.

  1. Connect to the network card with SSH using a free SSH client such as PuTTY.
  2. Log in with the administrator credentials.
  3. Send the command
    ftp -S enable
  4. Then issue the reboot command
    reboot

When the network card comes back up, connect to it again, but via FTP. Use a dedicated client, or simply open Windows Explorer and type ftp://x.x.x.x into the address bar, replacing the x's with the IP address of your card.

Now you must upload the files, one at a time, into the root directory (not in any folder), overwriting any files that may already be there. It is critical that you upload the files in the following order:

  1. Boot Monitor (bootmon)
  2. APC Operating System (AOS)
  3. Application (either SUMX or SY)

Wait 20-30 seconds between each upload to allow the device to process the file and perform the upgrade. It is best to disconnect and reconnect between uploads to be sure.
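The upload sequence can be sketched as a small shell loop using curl's FTP support. The filenames below are placeholders (substitute the actual .bin files from your firmware bundle), and the real transfer lines are commented out so this runs as a dry run:

```shell
HOST="x.x.x.x"                                  # IP address of your network card
UPLOAD_ORDER=(bootmon.bin aos.bin sumx.bin)     # bootmon first, then AOS, then the application
for f in "${UPLOAD_ORDER[@]}"; do
  echo "would upload $f to ftp://$HOST/"
  # curl -T "$f" "ftp://$HOST/" --user apc:yourpassword   # upload to the root directory
  # sleep 30                                              # let the card process the image
done
```

Uncomment the curl and sleep lines once the hostname, credentials, and filenames match your environment.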

Once complete, web access should be restored and the device updated. You can confirm by viewing the About -> Network section in the menu at the top.

Opening port 443 (HTTPS) and port 80 (HTTP) on Oracle Cloud Infrastructure (OCI)

If you found this guide, then you are in the situation of having a web server (such as Nginx or Apache) running in OCI, but you are unable to reach it. This guide covers Ubuntu (24.04 Noble Numbat at the time of this writing).

The first requirement is to set up the appropriate Ingress rules for the desired port in the OCI console. If you have already completed the ingress rules and are confident they were done correctly, you can skip this section of the guide.

  1. Navigate to Virtual Cloud Networks
    https://cloud.oracle.com/networking/vcns
  2. Click on your available VCN. If it doesn’t exist, you will need to create one.
  3. Navigate to the Security page.
  4. Click on your available Security List. If it doesn’t exist, you will need to create one.
  5. Navigate to Security Rules page
  6. Add a rule with the following properties
    • Source Type CIDR
    • Source CIDR 0.0.0.0/0
    • IP Protocol TCP
    • Destination Port Range [your desired port, e.g., 443]
  7. Click Add Ingress Rules button

Now that the desired port has been allowed through the virtual firewall, the next step is to adjust the firewall in the operating system.

When I was researching this issue on the web, I came across several guides that involved modifying iptables rules and making them persistent. For example:

"I cant seem to open ports 443 or 80" by u/Sector-No in r/oraclecloud

"A quick tips to people who are having issue opening ports on oracle cloud" by u/ArtSchoolRejectedMe in r/oraclecloud

Oracle also has published a guide:

https://docs.oracle.com/en-us/iaas/developer-tutorials/tutorials/apache-on-ubuntu/01oci-ubuntu-apache-summary.htm

After reading through these guides, I realized that the Oracle image of Ubuntu comes with the netfilter-persistent and iptables-persistent packages already installed. The guides suggest adjusting their configuration to allow the traffic. However, a simpler solution is to remove both packages altogether. This way you rely on the virtual firewall rules, and the OS has no software firewall.

Issue the following command to remove the packages and associated configuration:

sudo apt-get purge netfilter-persistent iptables-persistent

Then reboot the server.  When the server comes back up, the traffic should be passing through the port defined in the OCI console for the virtual firewall.
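Once the server is back up, you can sanity-check from the instance itself that the web server is actually bound to the port you opened (443 assumed here); if nothing is listening, the problem is the web server configuration rather than the firewall:

```shell
PORT=443
if ss -ltn | grep -q ":${PORT} "; then
  echo "a process is listening on port ${PORT}"
else
  echo "nothing is listening on port ${PORT} -- check the web server itself"
fi
# From an outside machine, confirm end-to-end reachability (placeholder address):
# curl -I https://x.x.x.x/
```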

Program Klipsch Flexus Core 100/200/210 with Harmony Remote

I didn’t do my homework before purchasing a Klipsch Flexus Core setup. After setting it up, I tried to add it to the MyHarmony software. However, I found no results for any of the Flexus Core products. When I tried to manually program the remote codes, I discovered that the remote is not IR but Bluetooth!

However, it takes four steps to get a Flexus Core setup working with a Harmony remote:

  1. Enable IR on the soundbar by setting it to Installer mode, per the following page:
    https://support.klipsch.com/hc/en-us/articles/29405500891924-Flexus-100-200-Operating-modes

    Press and hold the Source and Vol + buttons on the bar, then short-press the Bass button on the remote.
    Press the Vol + button on the remote to toggle through all operating modes:

    Consumer Mode (CONSUMER) → Display Mode (DISPLAY) → Retail Mode (RETAIL) → Installer Mode (INSTALLER) →

    Long-press Bass (>1 second) to confirm the selection.

  2. In the MyHarmony software, add a Klipsch Cinema 400
  3. Change your Activity settings to use the “Klipsch Amp” as your audio device
  4. Sync your hub and basic functionality should now be working!

Veeam Error: Cannot proceed with the job: existing backup meta file on repository is not synchronized with the DB.

Running Veeam Backup & Replication 12 (build 12.3.2.3617), I suddenly started getting this error:

Error: Cannot proceed with the job: existing backup meta file ‘JOB_NAME.vstore’ on repository ‘REPOSITORY_NAME’ is not synchronized with the DB. To resolve this, run repository rescan

The first thing I needed to do was look into this error to see if others had run into the same issue. I came across this thread:

https://forums.veeam.com/veeam-backup-replication-f2/backup-repository-is-not-synchronized-with-the-db-error-t26771.html

I proceeded to follow the steps outlined:

  1. Open the Backup Infrastructure view.
  2. In the inventory pane, select the Backup Repositories node.
  3. In the working area, select the backup repository and click Rescan Repository on the ribbon or right-click the backup repository and select Rescan repository.

When it was complete, I retried the job. However, it failed again with the exact same error.

I did some more digging and came across this thread:

https://community.veeam.com/discussion-boards-66/repository-rescan-failing-9144

It discussed how renaming a file listed in the Veeam error message solved the issue for them. In this case, I went looking for the .vstore file, which I found on the backup repository itself, inside the folder with the job name.

What I discovered is that the file was completely empty, but there was another file at this location with an identical name and the extension .vstorecopy. That file was not empty.

First I tried renaming the .vstore file to .vstore.bak. When I retried the job, it failed with the same error.

Next I tried renaming the .vstorecopy to .vstore and retrying the job. That also failed with the same error.

What finally fixed it for me was to “reconfigure” the backup repository listed in the error message. A combination of a repository rescan plus a repository reconfiguration was needed to fully resolve the issue.

Here are the steps:

  1. Open the Backup Infrastructure view.
  2. In the inventory pane, select the Backup Repositories node.
  3. In the working area, select the backup repository and click Edit Repository on the ribbon or right-click the backup repository and select Properties.
  4. Run through the configuration, clicking the Next button and changing no settings. On the Repository step, be sure to click the Populate button and allow it to calculate Capacity and Free space before continuing. At the end of the configuration, click the Apply and Finish buttons.

When you retry the job, it should complete without any errors!

Nextcloud Internal Server Error after upgrading Ubuntu from 22.04 to 24.04

After upgrading a server from Ubuntu 22.04 (Jammy Jellyfish) to 24.04 (Noble Numbat), a Nextcloud installation may break with the following error:

Internal Server Error 

The server encountered an internal error and was unable to complete your request.
Please contact the server administrator if this error reappears multiple times, please include the technical details below in your report.
More details can be found in the server log.

The log file is stored in the data folder of the Nextcloud installation (/data/nextcloud.log).

Inspecting this file revealed the following error:

"Exception":"Doctrine\\DBAL\\Exception","Message":"Failed to connect to the database: An exception occurred in the driver: could not find driver"

The upgrade to Ubuntu 24.04 installs PHP version 8.3. In some cases your system may have multiple versions of PHP installed, which can cause issues during an Ubuntu release upgrade. You can check which version of PHP is set as the system default with the following command:

sudo update-alternatives --config php

Now check that Apache or Nginx is using the same version.

For Apache, you can check the version of PHP that is loaded with this command:

ls /etc/apache2/mods-enabled/

To change the PHP version, use the following commands. Change X.X to appropriate value if it is not 8.3.

sudo a2dismod phpX.X
sudo a2enmod php8.3

Then restart Apache.

For Nginx and PHP-FPM, check the config files for enabled sites at this location:

/etc/nginx/sites-enabled

For example, change any lines containing the old version /var/run/php/php8.x-fpm.sock to point to the new version /var/run/php/php8.3-fpm.sock. Then restart Nginx.
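The substitution can be done in one pass with sed. The demo below runs on a scratch copy in /tmp so it is safe to try anywhere; on the real server you would point the sed at the files in /etc/nginx/sites-enabled/ (with sudo) and run nginx -t before restarting:

```shell
# Scratch copy standing in for a site config from /etc/nginx/sites-enabled/
cat > /tmp/nextcloud-site.conf <<'EOF'
location ~ \.php$ {
    fastcgi_pass unix:/var/run/php/php8.1-fpm.sock;
}
EOF
# Rewrite any php8.x socket path to the 8.3 one
sed -i 's|php8\.[0-9]\+-fpm\.sock|php8.3-fpm.sock|g' /tmp/nextcloud-site.conf
grep fastcgi_pass /tmp/nextcloud-site.conf
```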

With PHP out of the way, check that you have the correct PHP modules installed. Review your Nextcloud config file in the config folder of the Nextcloud installation (/config/config.php).

You’ll need the correct module based on the dbtype variable, for example php-mysql.

sudo apt-get install php-mysql

If you have memory caching (optional), you will need to install the appropriate module based on the memcache.local variable for that as well, such as php-apcu.

sudo apt-get install php-apcu

Once complete, restart Apache or Nginx.

Missing mysqli PHP extension after upgrading Ubuntu from 22.04 to 24.04

After upgrading a server from Ubuntu 22.04 (Jammy Jellyfish) to 24.04 (Noble Numbat), the Apache installation no longer functioned properly.  WordPress was also installed and would give the error:

Your PHP installation appears to be missing the MySQL extension which is required by WordPress.
Please check that the mysqli PHP extension is installed and enabled.

Also phpMyAdmin would give the following error

The mysqli extension is missing. Please check your PHP configuration.

This was confusing because the php-mysql package was already installed on the system. Many guides walk you through manually uncommenting the lines for the mysqli and pdo_mysql extensions in php.ini. Doing that had no effect.

Some guides will also have you create a PHP info page to confirm whether the MySQL extension is enabled. In the server’s webroot, create a new PHP file with the following code:

<?php
phpinfo();
?>

Save the file, then browse to that page in a web browser. This offered the first clue: the title at the top listed PHP Version 8.2, but Ubuntu 24.04 is configured for PHP 8.3 by default.

Digging into this further, the server had multiple versions of PHP installed. You can check this by running the following command:

sudo update-alternatives --config php

In this case it was confirmed that 8.3 was set as the system default, but the title of the PHP Info webpage confirmed Apache was configured for the older 8.2 version.

You can check which version of PHP is configured for Apache by using the following command to list all of the modules currently loaded:

ls /etc/apache2/mods-enabled/
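If you want to pull the version number out of that listing programmatically, a small sed filter works. The sample listing below is hard-coded so the snippet runs anywhere; on the server you would pipe the ls output through the same sed instead:

```shell
# Stand-in for the output of: ls /etc/apache2/mods-enabled/
MODS='dir.load
mpm_prefork.load
php8.2.load'
# Extract the version from the phpX.Y.load module filename
APACHE_PHP=$(printf '%s\n' "$MODS" | sed -n 's/^php\([0-9.]*\)\.load$/\1/p')
echo "Apache is loading PHP ${APACHE_PHP}"
```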

It was found that the server had a file named php8.2.load in the list. The following commands change the PHP module that Apache uses from 8.2 to 8.3:

sudo a2dismod php8.2
sudo a2enmod php8.3
sudo service apache2 restart

After that was complete, both WordPress and phpMyAdmin started working again.

If you are using Nginx with PHP-FPM, then you will need to edit the config file for each enabled site to point to the newer version. The site configuration files are stored at:

/etc/nginx/sites-enabled

For example, change any lines containing the old version /var/run/php/php8.2-fpm.sock to point to the new version /var/run/php/php8.3-fpm.sock. Then restart Nginx:

sudo service nginx restart

Veeam Agent: Failed to process method NasMaster: The file specified already exists.

Veeam has the ability to back up file shares, and in this case I had an NFS backup that randomly started failing. The only thing that had changed was a recent upgrade from Veeam 11 (specifically 11.0.1.1261) to Veeam 12 (specifically 12.3.1.1139). The upgrade went smoothly, with no errors generated during the installation process. However, about a month later, I received this error during a scheduled backup job:

Agent: Failed to process method {NasMaster.ExecuteBackupProcessor}: The file specified already exists.

The details of the error included:

Path: [Host: [xxx.xxx.xxx.xxx], Mount: [/path/to/backup], Disk: [Name/name/data/data/xxx.vblob], Type: [nfs3 (1)]]. The file specified already exists. NFS status code: 17 Failed to create nfs file stream. Failed to create nfs file. Failed to perform file backup Error: Agent: Failed to process method {NasMaster.ExecuteBackupProcessor}: The file specified already exists. Error: Agent: Failed to process method {NasMaster.ExecuteBackupProcessor}: The file specified already exists.

The automatic retries failed in the same manner.  At first I thought perhaps it was just a fluke, so I ignored it.  However, on the next scheduled backup the same error occurred.

The first thing I tried was to do a health check and repair on the backup using these two steps:

  1. Upgrade Veeam to the latest version (specifically 12.3.2.3617)
  2. Manually run a health check on the backup files

To perform a health check:

  1. Open Veeam Backup and Replication
  2. Go to Jobs -> Backup
  3. Find the job that was failing
  4. Right click on it -> Run health check

When I ran the health check, some errors were found and repairs were made to the backup:

Succeeded Repairing backup completed successfully

After it was complete, the job was manually run and it completed without any error.

I thought the issue was resolved, but it occurred again on the next backup with the same error. However, this time, even though the job failed, files were still backed up. The job would also automatically retry, and the retry would be successful. The details of the retry job showed that it processed no files but completed without any error.

This pattern kept repeating.

I tried increasing the CPU and memory of the machine to check if it was starved of resources, however that also didn’t fix the problem.

I then tried deleting the backup on disk. The steps I followed to delete the backup from the disk:

  1. Open Veeam Backup and Replication
  2. Go to Backups -> Disk
  3. Find the backup related to the job that was failing
  4. Right click on it -> Delete from disk

This seemed to work initially, however the errors started showing up again after multiple cycles.

I found the clue to the final solution in this thread:

https://www.reddit.com/r/Veeam/comments/vagwao/veeam_errors_failed_to_create_backup_file_because/

One key topic seemed to be the permissions of the ProgramData folder associated with the backup:

C:\ProgramData\Veeam\BACKUP folder

However, nothing looked unusual about the permissions at this location for this configuration.

What finally worked for me was to perform the following:

  1. Shut down Veeam and all related background services
  2. Completely delete the folder at the C:\ProgramData\Veeam\ location that is associated with the job.
  3. Reboot the system

After rebooting the system, it may take a couple backup cycles to completely flush out the failures. After that the errors should not return.

Invalid VMs after restoring ESXi configuration with a new boot drive

In this situation, the boot drive of ESXi was to be replaced. The existing ESXi installation was version 7.0 U3 and the boot drive also had a local datastore.  First a backup of the ESXi configuration was made and all VMs on the local datastore were moved to temporary storage.  A fresh install of ESXi was completed on a new drive, the VMs were moved back to the local datastore and then the ESXi configuration was restored.  However when the host booted up, all of the VMs registered on the host from the local datastore showed as Invalid.

The issue is that the inventory was pointing to the unique identifier (UUID) of the old datastore, which you can see in the path visible in the vSphere Client: /vmfs/volumes/xxxxxxxx-xxxxxxxx-xxxx-xxxxxxxxxxxx/<VM path>. The fix is to simply update the inventory entry for each registered VM with the UUID of the new datastore.

  1. Access the command line of the host
  2. Determine the UUID of the new datastore using the command:
    esxcli storage filesystem list
  3. Edit the file /etc/vmware/hostd/vmInventory.xml using vi:
    vi /etc/vmware/hostd/vmInventory.xml
  4. Modify the lines containing <vmxCfgPath> so that the UUID of the old datastore (in an 8-8-4-12 digit arrangement: xxxxxxxx-xxxxxxxx-xxxx-xxxxxxxxxxxx) is replaced with the UUID of the new datastore, leaving everything else the same.
  5. Save and close the file
  6. Restart the management services
    /etc/init.d/hostd restart
    
    /etc/init.d/vpxa restart
  7. Refresh the vSphere Client
  8. You should now see all VMs with the status as Normal.
  9. Update the swap file location of each VM by right clicking -> Edit Settings -> VM Options -> Advanced -> Edit Configuration
  10. Look for the configuration key sched.swap.derivedName and update the UUID
  11. Save the configuration and try to start the VM
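Since the edit in step 4 is a straight UUID swap, it can also be done with sed instead of vi. The demo below works on a scratch copy in /tmp with made-up UUIDs; on the host you would run the same sed against /etc/vmware/hostd/vmInventory.xml (after backing it up), using the UUIDs from esxcli storage filesystem list:

```shell
OLD_UUID="11111111-22222222-3333-444444444444"   # placeholder: old datastore UUID
NEW_UUID="55555555-66666666-7777-888888888888"   # placeholder: new datastore UUID
# Scratch copy standing in for /etc/vmware/hostd/vmInventory.xml
cat > /tmp/vmInventory.xml <<EOF
<ConfigEntry id="0001">
  <vmxCfgPath>/vmfs/volumes/${OLD_UUID}/myvm/myvm.vmx</vmxCfgPath>
  <objID>1</objID>
</ConfigEntry>
EOF
cp /tmp/vmInventory.xml /tmp/vmInventory.xml.bak          # always keep a backup
sed -i "s/${OLD_UUID}/${NEW_UUID}/g" /tmp/vmInventory.xml
grep vmxCfgPath /tmp/vmInventory.xml
```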

No VMFS datastore present after clean install of ESXi 7

After installing ESXi 7 (specifically Update 3f, Build 20036589) on a 128GB boot device, I found that a datastore had not been created.  When I tried to create a VMFS datastore I found that it was not possible because a very large VMFSL partition had consumed all remaining space.

After some research I found that this version of ESXi creates this new kind of system partition that can consume up to 128GB of space by default!

https://techdocs.broadcom.com/us/en/vmware-cis/vsphere/vsphere/7-0/esxi-installation-and-setup-7-0/installing-and-setting-up-esxi/esxi-requirements/esxi-system-storage-overview.html

https://knowledge.broadcom.com/external/article/345195/boot-option-to-configure-the-size-of-esx.html

Based on this documentation, if you are using a boot device of 128GB or less then you will run into this problem of having no datastore on the boot drive after a clean install!

The solution is to reinstall ESXi and limit the size of the system partition by passing a boot option to the installer during the boot up process.   In the Broadcom documentation, there are some recommended minimums for the system partition that should be met based on your hardware configuration (especially the amount of RAM).

Once you have determined which size you want to target, the method required depends on which build of ESXi you have on the installer.

If you are using an ESXi 7 version prior to Update 1c (build 17325551), then you will need to use:

autoPartitionOSDataSize=XXX where XXX is the number in MB. (e.g., 32768 would result in 32GB).
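The conversion is just the target size in gigabytes multiplied by 1024. For example, to check the value for a 32GB system partition:

```shell
TARGET_GB=32
# Gigabytes to megabytes for the boot option value
echo "autoPartitionOSDataSize=$(( TARGET_GB * 1024 ))"   # -> autoPartitionOSDataSize=32768
```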

If you are using ESXi 7 Update 1c or later, you can use the first approach above or the more simplified approach below:

systemMediaSize=XXX where XXX is either min, small, default or max corresponding to 32GB, 64GB, 128GB or all remaining GB, respectively.

The first step is to completely wipe the OS drive using a third-party tool, because the ESXi installer will not delete any existing VMFSL partitions.

Once the OS drive is completely free of partitions, reboot with the ESXi installer media.  Wait for it to prompt you for Shift+O to modify the boot loader, then press that key combination.

At that point, you can limit the size of the system partition by appending one of the two options listed above to the end of the boot command line.

For example:
runweasel cdromBoot systemMediaSize=min

Then press Enter.

Run through the installation as usual but this time you should end up with a datastore automatically created once complete!

Notes:

  1. The boot loader options are case sensitive

Selecting a CPU test in Ultimate Boot CD instantly reboots the system

I had a PC based on an ASRock Z97 chipset motherboard with a Haswell-generation i7 processor and integrated graphics. Before considering the system stable, I attempted a stress test using the latest version of UBCD (version 5.3.9 at the time of this writing). For these tests, I used Rufus to write the ISO to a USB drive.

As expected, when booting from the thumbdrive, I had to select the non-UEFI version, otherwise the system would kick back to the boot menu.

Once loaded into UBCD, I found that Prime95, as well as many other options under the CPU category, would fail to load. Upon selecting an option, the system would immediately go to a black screen and then automatically restart. However, the Memory -> Memtest86 option worked fine and was completely stable.

I tried many things to resolve the issue such as resetting all BIOS options to default, a different thumb drive, different processor (i5, another Devil’s Canyon Haswell) and a different power supply.  None of these fixed it.

As another test I booted the system into Windows 10 and it looped Prime95 without a problem, so I kept digging.

During my research, I stumbled upon a support thread on the UBCD forums, specifically this post:

https://www.ultimatebootcd.com/forums/viewtopic.php?p=11119#p11119

All of the failing tests depend on CPUstress, which is launched by syslinux. These include:

  • CPUburn
  • CPU Burn-in
  • CPUinfo
  • Intel Optimized LINPACK Benchmark
  • Mersenne Prime Test (prime95)
  • Stress
  • StressCPU
  • System Stability Tester

On the other hand, the options that were functional included:

  • CPUID
  • x86test
  • Intel Processor Frequency ID
  • Intel Processor Identification Utility

A workaround for the failing tests is to first select the desired test, press the Tab key, and then change the first part of the command from:

/ubcd/boot/cpustress/bzImage

to:

/pmagic/bzImage

Then press Enter. This managed to get the CPU test loaded (in this case, the Mersenne Prime test); however, the USB keyboard stopped working and I couldn’t answer the prompts to begin the test. I plugged in a PS/2 keyboard and was able to get it working that way.

Another workaround I discovered was installing a dedicated video card. With the GPU installed, everything worked as expected without modifying the boot loader.

However, if you don’t have a discrete GPU available, the final fix I found was a workaround in the BIOS.

My motherboard has an option to set the amount of memory dedicated to the integrated graphics processor built into the CPU (Advanced -> Chipset Configuration -> Share Memory). By default this option is set to Auto. The fix is to set it to a specific value between 32MB and 128MB. Setting it to 256MB failed in the same way as Auto.

Once that is set, reboot, boot up UBCD and try the CPU test again!