A few days ago, I decided to open a pull request for a feature I had originally written for use at my company: a rate-limiting middleware. Unlike most conventional middlewares in the package, rate limiting is a tricky one to write tests for, because the test suite is inherently time-based: you have to verify behavior over time. It didn't seem like much to me at first. I wrote the tests, hit 100% coverage, and they all passed on my machine.
I opened a PR against Echo and noticed that the tests passed in some environments but failed in others. After trying to make sense of it, I found no pattern; the failures were totally random and unpredictable. I studied the source code and tried to pin the fault on a particular unit, but everything seemed flawless. That was when I examined the only variable factor: time. My tests depended on the host machine's clock, and system ticks are subject to lag when the CPU is under strain, so the tests were most likely failing on machines with less CPU power because of lag between ticks.
Here is a link to the updated PR in case you want to see what is going on yourself:
adds middleware for rate limiting #1724
What's New?
This feature implements a store-agnostic rate-limiting middleware, i.e. it can be configured with any store of the user's choice.
Here is a dummy snippet of how a (hypothetical) Redis store might be integrated with the rate-limiting middleware:
type RedisStore struct {
	client redis.Client
}

// A store must implement Allow to be usable by the rate-limiting middleware.
func (store *RedisStore) Allow(identifier string) bool {
	// run logic here that decides if the user should be permitted
	return true
}

func main() {
	e := echo.New()
	redisStore := RedisStore{
		client: redis.Client{},
	}
	limiterMW := middleware.RateLimiter(&redisStore)
	e.Use(limiterMW)
}
I also threw in an in-memory implementation for people like me who want to get going fast.
func main() {
	e := echo.New()
	inMemoryStore := middleware.RateLimiterMemoryStore{
		rate:  1,
		burst: 3,
	}
	limiterMW := middleware.RateLimiterWithConfig(middleware.RateLimiterConfig{
		Store: &inMemoryStore,
		SourceFunc: func(ctx echo.Context) string {
			return ctx.RealIP()
		},
	})
	e.Use(limiterMW)
}
closes #1721
So, having established that the system ticker could not be trusted to tick at a rate consistent enough to verify that my rate limiter works as intended, I set out to find a stable time model for my tests that does not depend on the system ticker at all. The rest of this post covers the specifics.
First, wrap every usage of the standard time helpers in a custom function that resolves to it. In my case, the only helper I needed to wrap was time.Now:
var now = time.Now
Next, replace every occurrence of the now-wrapped helpers in your source code with the custom ones. The goal is to keep using the original helpers in the main source code while being able to mock them easily in the corresponding tests.
func (store *RateLimiterMemoryStore) Allow(identifier string) bool {
	store.mutex.Lock()
	limiter, exists := store.visitors[identifier]
	if !exists {
		limiter = new(Visitor)
		limiter.Limiter = rate.NewLimiter(store.rate, store.burst)
		limiter.lastSeen = now() // instead of time.Now()
		store.visitors[identifier] = limiter
	}
	limiter.lastSeen = now() // instead of time.Now()
	store.mutex.Unlock()
	if now().Sub(store.lastCleanup) > store.expiresIn {
		store.cleanupStaleVisitors()
	}
	return limiter.AllowN(now() /* instead of time.Now() */, 1)
}
Then, in the tests, we change what now() returns from the standard time to our own computed time:
func TestRateLimiterMemoryStore_Allow(t *testing.T) {
	var inMemoryStore = NewRateLimiterMemoryStore(RateLimiterMemoryStoreConfig{rate: 1, burst: 3, expiresIn: 2 * time.Second})
	testCases := []struct {
		id      string
		allowed bool
	}{
		{"127.0.0.1", true},  // 0 ms
		{"127.0.0.1", true},  // 220 ms burst #2
		{"127.0.0.1", true},  // 440 ms burst #3
		{"127.0.0.1", false}, // 660 ms block
		{"127.0.0.1", false}, // 880 ms block
		{"127.0.0.1", true},  // 1100 ms next second #1
		{"127.0.0.2", true},  // 1320 ms allow other ip
		{"127.0.0.1", false}, // 1540 ms no burst
		{"127.0.0.1", false}, // 1760 ms no burst
		{"127.0.0.1", false}, // 1980 ms no burst
		{"127.0.0.1", true},  // 2200 ms next second
		{"127.0.0.1", false}, // 2420 ms no burst
		{"127.0.0.1", false}, // 2640 ms no burst
		{"127.0.0.1", false}, // 2860 ms no burst
		{"127.0.0.1", true},  // 3080 ms next second
		{"127.0.0.1", false}, // 3300 ms no burst
	}
	for i, tc := range testCases {
		t.Logf("Running testcase #%d => %v", i, time.Duration(i)*220*time.Millisecond)
		// Instead of a time.Sleep, we manually advance the date that the
		// now function returns, using the iterator from the for loop.
		now = func() time.Time {
			return time.Date(2009, time.November, 10, 23, 0, 0, 0, time.UTC).Add(time.Duration(i) * 220 * time.Millisecond)
		}
		allowed := inMemoryStore.Allow(tc.id)
		assert.Equal(t, tc.allowed, allowed)
	}
}
You will notice I am not depending on time.Sleep or the system ticker to simulate the passage of time in my tests. Instead, I compute the next timestamp in the for loop from the iteration count. This way, the tests never have to wait for the system ticker to actually tick before working with its value, and since the time is computed, they do not have to wait the required number of real seconds to complete. That brings speed back to your tests, along with CPU-agnostic reliability: you can now write time-based tests that pass regardless of the clock speed of the CPU.