OVHcloud Bare Metal Cloud Status

FS#12082 — vrack1.5
Incident Report for Bare Metal Cloud
Resolved
We have a problem with the vRack between
RBX and three other datacenters: SBG, GRA, and BHS.
We are investigating the cause of the problem.

The impacted customers are those using the vRack between
these datacenters.

Update(s):

Date: 2014-11-23 07:09:24 UTC
After the first restart of the udp process, everything was back in order. We then looked for the network flow that was triggering the problem. After two hours the udp process had already climbed back to 50% of its maximum memory usage, but by analyzing the packets being punted to the CPU we were able to identify the offending flow precisely. We then added the required filtering to prevent that flow from reaching the router CPU, fixing the problem permanently.
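
For illustration only, a control-plane protection filter of this kind on IOS XR is typically an ingress access-list that drops the offending flow before it can be punted to the route processor. This is a minimal sketch: the access-list name, UDP port and interface below are placeholders, not the actual values used here.

! Hypothetical example: drop the offending UDP flow at ingress so it is
! never punted to the RSP CPU (name, port and interface are placeholders)
ipv4 access-list VRACK-CPU-PROTECT
 10 deny udp any any eq 12345
 20 permit ipv4 any any
!
interface TenGigE0/0/0/0
 ipv4 access-group VRACK-CPU-PROTECT ingress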

Date: 2014-11-22 14:47:35 UTC
The problem seems to be fixed.

We are verifying.

Date: 2014-11-22 14:47:20 UTC
#run attach 0/RSP0/CPU0
sysmgr_control -A -r udp

to restart the udp process

Date: 2014-11-22 14:29:09 UTC
It is possible that we are hitting Cisco bug CSCug87873.
A patch exists; we are checking the conditions for applying it
(whether or not it requires a reload).
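
For context, this is a minimal sketch of how such a patch is usually applied on IOS XR, assuming it ships as an SMU package; the server URL and package names are placeholders.

admin
 ! add the SMU package from a file server (placeholder URL and file name)
 install add source tftp://192.0.2.1/ asr9k-px.CSCug87873.pie
 ! activate it; a reload-type SMU restarts the affected nodes at this step
 install activate disk0:asr9k-px.CSCug87873-1.0.0
 ! make the activation persistent across reloads
 install commit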

Date: 2014-11-22 14:28:16 UTC
We are reviewing the configuration changes made since last night
to find the origin of this problem.
Cisco TAC is also working on the issue.
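
As an illustration of this step, recent configuration changes on IOS XR can be reviewed through the commit history; the commit ID below is a placeholder.

show configuration commit list
show configuration commit changes 1000000045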

Date: 2014-11-22 13:31:25 UTC
We are working with the equipments manufacturer
to find the origin of the problem.
The vrack inside RBX is still working. The vrack
in the other DC is still working and between the DC.
The problem is at the level of the equipments at Roubaix
which were isolated from the vrack network and continued
to work normally on the IPv4/IPv6. We are searching
the origin of the problem.
Posted Nov 22, 2014 - 11:13 UTC