2018-07-09

Do not use "downlink delay" on Cisco Nexus if vPC peer-keepalive is done through the access ports

The Cisco Nexus 3000 series switches with 1GE copper interfaces support the "downlink delay" feature, which looks really helpful at first glance, since it blocks traffic flow until the switch is connected to the core. But you should be very careful when combining it with vPC if peer-keepalive is built either over the access copper ports or the downlink ones (a mostly non-existent scenario, since you usually can't spare even one downlink port with the usual 4-port configuration), instead of the default recommendation of mgmt0.
With downlink delay configured, the access ports come up with the specified delay (30 seconds by default), which means peer-keepalive stays down for that interval. When one of the switches goes down and comes back up, the second switch (its vPC peer) concludes that, since peer-keepalive is down while the peer-link is up, it should not become primary and, in fact, should shut down all local vPCs. So, whenever you reload either of the vPC peers, all your vPCs are down on both switches for the duration of the downlink delay.
The solution is simple -- either disable downlink delay (we went this way and didn't encounter any of the problems we had anticipated when enabling this setting in the first place), or use the mgmt0 ports for vPC peer-keepalive.
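For reference, here is a minimal sketch of the mgmt0-based peer-keepalive configuration (the vPC domain ID and addresses are placeholders, and the exact syntax may differ between NX-OS releases):

    interface mgmt0
      ip address 192.0.2.1/24

    vpc domain 10
      ! keepalive over the out-of-band management network, unaffected by downlink delay
      peer-keepalive destination 192.0.2.2 source 192.0.2.1 vrf management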

2018-01-07

You might want to run your .NET Core ping tool with superuser rights on Linux

ICMP echo (ping) on .NET Core on Linux may be too slow if the .NET process is not running with superuser rights. On Windows there is the IcmpSendEcho2 function (from the IP Helper library) that allows ICMP echo/reply even for unprivileged users. On Linux, ICMP echo requires working with raw sockets, which in turn requires superuser rights. To work around this limitation, .NET Core's implementation of System.Net.NetworkInformation.Ping uses a trick: when superuser rights are not available, it runs the system ping tool (which can access raw sockets thanks to its SUID bit), see here. But if you're pinging many hosts and doing it frequently (say, once every 50 ms), spawning a process for each operation may become too slow, putting extra load on the machine and skewing the measurement results. To get fast pings on Linux, run your process with superuser rights.
Also, note that .NET Core currently implements the synchronous Ping API as a wrapper around the asynchronous methods, so there is next to no performance benefit in using the sync version (as there was on .NET Framework).
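To illustrate, here is a minimal sketch (the host list, timeout and interval are made up) of pinging many hosts concurrently with System.Net.NetworkInformation.Ping; run it as root on Linux to stay on the fast, in-process code path:

    using System;
    using System.Linq;
    using System.Net.NetworkInformation;
    using System.Threading.Tasks;

    class FastPing
    {
        static async Task Main()
        {
            // Placeholder targets; replace with your own host list.
            var hosts = new[] { "192.0.2.1", "192.0.2.2", "192.0.2.3" };

            while (true)
            {
                var replies = await Task.WhenAll(hosts.Select(async host =>
                {
                    using (var ping = new Ping())
                    {
                        // SendPingAsync avoids blocking a thread per host; the sync
                        // Send() is a wrapper around the async path on .NET Core anyway.
                        return await ping.SendPingAsync(host, timeout: 1000);
                    }
                }));

                foreach (var reply in replies)
                    Console.WriteLine($"{reply.Address}: {reply.Status}, {reply.RoundtripTime} ms");

                // Roughly one round every 50 ms, as in the scenario above.
                await Task.Delay(50);
            }
        }
    }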

2017-08-17

.NET Core 2.0 in AWS ElasticBeanstalk-managed environments

Update: This is no longer needed.

.NET Core 2.0 has been released, but the AWS Elastic Beanstalk Windows AMIs do not support it yet (only 1.0/1.1 is supported), so if you're using the AWS Toolkit for Visual Studio 2017, you can't (successfully) deploy .NET Core 2.0 projects.
While waiting for Amazon to update them (not sure they are going to, especially since .NET Core 2.0 is not an LTS release for now), I devised a quick fix that requires minimal changes to the project and does not require any AWS interaction. Feel free to use it until AWS upgrades the AMIs in question.
If dotnet/core#848 is resolved, an easier way using the "packages" key will become available.

Also, a small recommendation on AMI selection -- do not use Windows Server 2016 images for these kinds of deployments: you gain almost nothing, and the antimalware tool installed by default hurts performance too much. I recommend the Windows Server 2012 R2 Core AMI (ami-1bfa1a63 at the time of writing).

P.S. I know that Azure deployments are easier, but their "security measures" are developer-unfriendly.

2017-07-13

mod_gridfs is dead, long live gridfs_server

Long time no write. mod_gridfs is no longer going to be developed; it is superseded by gridfs_server. The new server is written in C#, based on ASP.NET Core and Kestrel, runs on .NET Core 2.0 Preview 2, and successfully serves about 25 million files per day from our MongoDB GridFS installation, acting as a backend to a front-end cache (which serves about 450 million files per day).
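For a rough idea of what such a server looks like, here is a minimal sketch (not gridfs_server's actual code; the connection string, database name and the fixed content type are illustrative assumptions):

    // Serves GridFS files over Kestrel using ASP.NET Core and MongoDB.Driver.GridFS.
    using System.Threading.Tasks;
    using Microsoft.AspNetCore.Builder;
    using Microsoft.AspNetCore.Hosting;
    using Microsoft.AspNetCore.Http;
    using MongoDB.Driver;
    using MongoDB.Driver.GridFS;

    public class Program
    {
        public static void Main()
        {
            var database = new MongoClient("mongodb://localhost").GetDatabase("files");
            var bucket = new GridFSBucket(database);

            new WebHostBuilder()
                .UseKestrel()
                .Configure(app => app.Run(async context =>
                {
                    var name = context.Request.Path.Value.TrimStart('/');
                    try
                    {
                        // Stream the file straight from GridFS into the response body.
                        using (var stream = await bucket.OpenDownloadStreamByNameAsync(name))
                        {
                            context.Response.ContentType = "application/octet-stream";
                            await stream.CopyToAsync(context.Response.Body);
                        }
                    }
                    catch (GridFSFileNotFoundException)
                    {
                        context.Response.StatusCode = StatusCodes.Status404NotFound;
                    }
                }))
                .Build()
                .Run();
        }
    }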

2015-12-06

ASP.NET browser capabilities caching gotcha

If you're doing ASP.NET (pre-vNext) development and use browser capabilities checking (e.g. Request.Browser), add the following to your config file:

<browserCaps userAgentCacheKeyLength="256" />

You'll thank me later.
In short, ASP.NET caches browser capabilities keyed on the first N characters of the User-Agent string, where N is 64 by default. After a "mobile" Googlebot visits your site (with a User-Agent of "Mozilla/5.0 (iPhone; CPU iPhone OS 8_3 like Mac OS X) AppleWebKit/600.1.4 (KHTML, like Gecko) Version/8.0 Mobile/12F70 Safari/600.1.4 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"), your iPhone users will be treated as crawlers, because the first 64 characters of that string are indistinguishable from a genuine iPhone User-Agent -- which is probably not what you want.
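For reference, the element goes under system.web in Web.config; a minimal sketch (the rest of the file omitted):

    <configuration>
      <system.web>
        <!-- Key the capabilities cache on the first 256 characters of the
             User-Agent instead of the default 64. -->
        <browserCaps userAgentCacheKeyLength="256" />
      </system.web>
    </configuration>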

2014-10-30

mod_gridfs v0.4: tag sets

Short news: mod_gridfs is now at v0.4, supporting read preference tag sets along with some performance and robustness improvements. The addition of tag sets enables many interesting things, for example fair 1-to-1 load balancing between shard replicas.

2013-11-11

Backups to Amazon S3 -- simple and efficient

We use Amazon S3 as a part of our backup strategy -- all of our backup servers in the datacenter replicate local backup images to S3 daily. While we have at least six physical copies of each backup (three copies, each on a different machine, with all backup disks in RAID1), having an offsite copy for disaster recovery is very important, and will remain so even if we go multi-datacenter and replicate data in near-realtime.
During the lifetime of this installation we have used different approaches. The first one was s3cmd -- while very functional and reliable, it was slow, because there was no real way to determine what had changed between the local and remote "filesystems", and simply copying hundreds of gigabytes per host per day took too long. We thought that something rsync-like would be much better, so we moved to s3fs+rsync. Unfortunately, it was very unstable, and it either required a second copy of the files (to cache remote attributes) or was prone to downloading parts of files to compare them with the originals just to determine whether a file should be copied at all. We also evaluated duplicity (it consumed large amounts of temporary space) and a couple of commercial solutions, but none of them were good enough for us.
So, I decided to write a simple utility that would do this kind of sync -- s3backup.
Features:
  • easy to use -- configure AWS credentials, specify source and destination, put it into your crontab and you're set
  • resource-efficient -- never downloads remote files, compares file sizes (good if your backups are named differently for each day) and/or MD5 checksums (good for other cases, consumes a bit more CPU), does not attempt to read entire files into RAM, etc.
  • works with large files -- currently, the upper limit is about 500GB (10,000 chunks of 50MB each); this can easily be increased to 5TB if you don't need MD5 checksum comparison
  • supports recycling -- locally-removed files are removed from S3 only after they reach a specified age
You're welcome to try it out and use it. Feedback (especially about the file sizes you manage and the RAM constraints on your backup machines) is also very welcome.
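To illustrate the "compare metadata, never download" idea, here is a minimal sketch using the AWS SDK for .NET (this is not s3backup's actual code; the bucket name, paths and the size-only comparison are illustrative assumptions -- s3backup also compares MD5 checksums and handles multipart uploads):

    using System.Collections.Generic;
    using System.IO;
    using System.Threading.Tasks;
    using Amazon.S3;
    using Amazon.S3.Model;
    using Amazon.S3.Transfer;

    class S3SyncSketch
    {
        static async Task Main()
        {
            var bucket = "my-backup-bucket";     // placeholder
            var localDir = "/srv/backup/images"; // placeholder

            using (var s3 = new AmazonS3Client()) // credentials/region from the environment
            {
                // Build a size map of what is already in S3 without downloading anything.
                var remote = new Dictionary<string, long>();
                var request = new ListObjectsV2Request { BucketName = bucket };
                ListObjectsV2Response response;
                do
                {
                    response = await s3.ListObjectsV2Async(request);
                    foreach (var obj in response.S3Objects)
                        remote[obj.Key] = obj.Size;
                    request.ContinuationToken = response.NextContinuationToken;
                } while (request.ContinuationToken != null);

                var transfer = new TransferUtility(s3);
                foreach (var path in Directory.EnumerateFiles(localDir))
                {
                    var key = Path.GetFileName(path);
                    var size = new FileInfo(path).Length;

                    // Upload only when the object is missing or its size differs;
                    // checksum comparison would go here when sizes match.
                    if (!remote.TryGetValue(key, out var remoteSize) || remoteSize != size)
                        await transfer.UploadAsync(path, bucket, key);
                }
            }
        }
    }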