All Systems Operational
API: Operational
Dashboard: Operational
Acceleration: Operational
99.83% uptime over the past 90 days
Website: Operational
Collect: Operational
Logs delivery: Operational
99.63% uptime over the past 90 days
System metrics: Global optimization engine overhead, Fasterize DC1 availability, Fasterize euwest1 availability.
Past Incidents
Jan 28, 2020
Resolved - KeyCDN has taken a problematic edge server out of production for further analysis.
We have not seen any 502 errors since 15:05.
We have plugged the previously unplugged websites back onto KeyCDN.
Jan 28, 15:45 CET
Monitoring - We have unplugged KeyCDN while waiting for a complete resolution.
Jan 28, 15:04 CET
Investigating - We are currently investigating 502 errors emitted by KeyCDN.
We have already asked their support for more details.
The new AWS euwest1 platform is not impacted by this incident.
Jan 28, 14:43 CET
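For illustration only (not part of the original report), a minimal sketch of how one might probe an edge for intermittent 502 responses during an incident like this; the URL and request count are placeholders.

```python
import time
import requests

# Hypothetical URL used only for illustration.
URL = "https://www.example.com/"
ATTEMPTS = 20

def count_502(url: str, attempts: int) -> int:
    """Send a series of requests and count HTTP 502 responses."""
    errors = 0
    for _ in range(attempts):
        try:
            resp = requests.get(url, timeout=5)
            if resp.status_code == 502:
                errors += 1
        except requests.RequestException:
            # Network-level failures could be counted separately if needed.
            pass
        time.sleep(1)  # space the probes out to avoid hammering the edge
    return errors

if __name__ == "__main__":
    print(f"502 responses: {count_502(URL, ATTEMPTS)}/{ATTEMPTS}")
```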
Completed - The scheduled maintenance has been completed.
Jan 28, 00:00 CET
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Jan 27, 22:00 CET
Scheduled - To better assess deployment impact, we use a scale from 0 to 3:
0: not critical (no impact is expected after deploying the feature or fix; this usually concerns the Fasterize website, API or auxiliary tools)
1: minor (the deployment may have a minor impact on optimized websites; this usually concerns the optimization engine or the cache layer)
2: major (the deployment may have a major impact on optimized websites; it is advisable to monitor the websites during this deployment).

Fix
[0] [infra] Improve proxy availability when rotating log files (see the sketch after this list)
[0] [infra] Add metadata type to preload headers when storing
[0] [engine] Removing a config V2 correctly loads the config V1
[0] [engine] Add the possibility to choose which config (V1 or V2) to use.
Jan 21, 19:57 CET
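As a purely illustrative aside on the log rotation fix above: one common way to keep a proxy available while its log files are rotated is to reopen the log on a signal instead of restarting the process. The sketch below shows that pattern in miniature; it is not Fasterize's actual implementation, and the file path and signal choice are assumptions.

```python
import signal
import sys

# Hypothetical log path, for illustration only.
LOG_PATH = "/var/log/proxy/access.log"

log_file = open(LOG_PATH, "a", buffering=1)

def reopen_log(signum, frame):
    """Close and reopen the log file so logrotate can move the old one away."""
    global log_file
    log_file.close()
    log_file = open(LOG_PATH, "a", buffering=1)

# A logrotate postrotate script would send SIGUSR1 to this process.
signal.signal(signal.SIGUSR1, reopen_log)

def log(line: str) -> None:
    log_file.write(line + "\n")

if __name__ == "__main__":
    # Toy loop standing in for the proxy's request handling.
    for line in sys.stdin:
        log(line.rstrip())
```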
Jan 26, 2020

No incidents reported.

Jan 25, 2020

No incidents reported.

Jan 24, 2020

No incidents reported.

Jan 23, 2020

No incidents reported.

Jan 22, 2020
Postmortem - Read details
Jan 23, 13:11 CET
Resolved - This incident is now resolved.
From AWS:
"10:25 AM PST We are investigating an issue which is affecting internet connectivity to a single availability zone in EU-WEST-3 Region.
11:05 AM PST We have identified the root cause of the issue that is affecting connectivity to a single availability zone in EU-WEST-3 Region and continue to work towards resolution.
11:45 AM PST Between 10:00 AM and 11:28 AM PST we experienced an issue affecting network connectivity to AWS services in a single Availability Zone in EU-WEST-3 Region. The issue has been resolved and connectivity has been restored."
Jan 22, 21:46 CET
Monitoring - AWS is working on the DNS resolution issue. The platform has been behaving normally since the DNS resolver change.
Jan 22, 20:08 CET
Investigating - We detected an issue impacting DNS resolution on some machines. We have already worked around it by switching the DNS resolver. We are investigating why the original DNS resolver is not working.
Jan 22, 19:49 CET
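As an illustration of the kind of check involved (not Fasterize's actual tooling), the sketch below resolves a hostname against both the system resolver and a specific fallback resolver so the two can be compared. It assumes the dnspython package; the hostname and resolver address are placeholders.

```python
# Requires the dnspython package: pip install dnspython
import dns.resolver

# Placeholder hostname and resolver IP, for illustration only.
HOSTNAME = "origin.example.com"
RESOLVERS = {
    "default": None,          # system-configured resolver
    "fallback": ["1.1.1.1"],  # alternative public resolver
}

def resolve_with(nameservers):
    """Resolve HOSTNAME, optionally forcing a specific list of nameservers."""
    resolver = dns.resolver.Resolver()
    if nameservers:
        resolver.nameservers = nameservers
    answer = resolver.resolve(HOSTNAME, "A", lifetime=5)
    return [rr.address for rr in answer]

if __name__ == "__main__":
    for label, servers in RESOLVERS.items():
        try:
            print(label, resolve_with(servers))
        except Exception as exc:  # timeouts, NXDOMAIN, etc.
            print(label, "failed:", exc)
```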
Jan 21, 2020

No incidents reported.

Jan 20, 2020

No incidents reported.

Jan 19, 2020

No incidents reported.

Jan 18, 2020

No incidents reported.

Jan 17, 2020

No incidents reported.

Jan 16, 2020

No incidents reported.

Jan 15, 2020
Postmortem - Read details
Jan 15, 14:48 CET
Resolved - Closing this incident as there have been no other attacks.
Jan 15, 10:01 CET
Update - Our post-mortem will be published as soon as possible.
Jan 14, 21:03 CET
Monitoring - The traffic is now routed to Fasterize. We're actively monitoring.
Jan 14, 20:29 CET
Identified - The issue has been identified and a fix is being implemented.
Jan 14, 20:25 CET
Update - Our platform is under attack. We are mitigating it and hope to recover very soon.
Jan 14, 20:22 CET
Update - We are continuing to investigate the issue. Traffic has been routed to the origin since 18:23.
The investigation is focused on abnormal CPU consumption in the proxy layer.
Jan 14, 19:45 CET
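For illustration only (not the monitoring Fasterize used), a short sketch of how per-process CPU consumption on a proxy host could be sampled to spot the kind of abnormal load described above; the process name filter is an assumption.

```python
# Requires the psutil package: pip install psutil
import time
import psutil

# Hypothetical process name to watch; adjust to the actual proxy binary.
PROXY_NAME = "proxy"

def sample_proxy_cpu(interval: float = 1.0):
    """Return (pid, cpu_percent) for processes whose name contains PROXY_NAME."""
    procs = [p for p in psutil.process_iter(["name"])
             if PROXY_NAME in (p.info["name"] or "")]
    for p in procs:
        p.cpu_percent(None)      # prime the counter
    time.sleep(interval)         # measure over the interval
    return [(p.pid, p.cpu_percent(None)) for p in procs]

if __name__ == "__main__":
    for pid, cpu in sample_proxy_cpu():
        print(f"pid={pid} cpu={cpu:.1f}%")
```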
Investigating - We are currently investigating 502 errors emitted by Fasterize since 18:09.
Jan 14, 18:29 CET