A stable and performant network is essential to a successful DebConf.
While we did experience some hardware issues, we were always able to work around them, improving the network from a fairly good start to a level that many attendees called the best at any DebConf so far.
One sponsored server and several disks were dead on arrival, requiring a reshuffling of planned services among the available machines.
Upstream was delivered on an RJ45 port rather than the single-mode 1310 nm fibre that had previously been communicated. As our sponsored ASR1002-X refused to work with both Cisco and third-party RJ45 SFP modules, we were forced to use a Cisco WS-C2960S as a media converter.
Initial build-up was quick, taking about 1.5 days in total, even though we had to work around the hardware sponsorship differing from what we expected. The switches with PoE support had X2 slots but no adapters or pluggables, whereas the Cisco WS-C2960S switches without PoE support had SFP ports built in. We therefore used the WS-C2960S switches as media converters to connect the PoE ones. The only multi-mode pluggables we had on site were 10G SFP+ ones, so we had to run 80km 1G SFP pluggables over the multi-mode pre-cabling on site. This proved surprisingly stable.
All subnets were routed from our main upstream router to a single Debian machine to ease internal routing and firewalling of services.
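A minimal sketch of what such a single-router setup looks like on a Debian machine; the interface names, VLAN IDs, and subnets below are illustrative placeholders, not the actual DebConf values:

```shell
# Enable IPv4 forwarding so the Debian box routes between subnets.
sysctl -w net.ipv4.ip_forward=1

# Each conference subnet arrives on its own VLAN interface
# (hypothetical IDs and addresses for illustration).
ip addr add 10.10.1.1/24 dev eth0.101   # e.g. attendee network
ip addr add 10.10.2.1/24 dev eth0.102   # e.g. server/services network

# With all routing on one box, firewalling lives in one place:
# drop forwarded traffic by default, allow established flows,
# and permit only chosen directions (eth1 = upstream here).
iptables -P FORWARD DROP
iptables -A FORWARD -m conntrack --ctstate ESTABLISHED,RELATED -j ACCEPT
iptables -A FORWARD -s 10.10.1.0/24 -o eth1 -j ACCEPT
```

The trade-off, as the outage below showed, is that this machine becomes a single point of failure for both services and routing.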
In total, we had four sites with structured Cat5 pre-cabling to the AP locations. The sites were interconnected in a daisy-chain with OM2 multi-mode.
DebCamp was relatively quiet and passed without major incidents. During the week we installed an external air conditioner, as the closet serving as a server room was overheating. On Friday we upgraded all backbone links to 10G with multi-mode optics after receiving several X2 to SFP+ and X2 to 2 * SFP adapters. The last leg had multi-mode cabling incapable of 10G, so we used two 80km 1G SFPs in a port-channel instead.
On August 16th, at around 09:00 and again at 22:00, the main server hosting almost all services and handling all routing died. Moving all daemons to another server and all routing back to the ASR, thus restoring full service, took us until approximately 03:30 on Monday morning.
August 20th saw a complete upstream outage: the data centre hosting our upstream lost power, and the misconfigured UPS and diesel generator tripped a fuse when they kicked in.
Lessons for next time
- All hardware sponsorship should be finalized and reconfirmed well in advance
- Where possible, hardware should be tested before it's deployed
- Pre-configured VMs or similar should be created to save time during build-up
- If possible, a full copy of the Debian FTP archives should be brought on site to reduce the time needed to rsync the mirror data
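For the last point, pre-seeding a local mirror can be as simple as an rsync from a public Debian mirror before departure. The mirror hostname and destination path below are examples only; a nearby official mirror and the ftpsync tooling are generally preferable for a production mirror:

```shell
# Illustrative one-shot sync of the Debian archive to a local disk
# (example mirror and path; expect several hundred GB of transfer).
rsync -aH --delete \
    rsync://ftp.de.debian.org/debian/ /srv/mirror/debian/
```

Carrying the archive on site means only the delta since the last sync has to cross the conference uplink.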