Part 2 – Setup Test Environment And Write A Test

So, I’ve got all these ideas in my head about whether I should use channels, mutexes, callback functions and lots of other things, which I will answer in time.  The first thing to do is to define a problem to solve – otherwise how am I going to know when I have solved it?

In part 1 I started using Ginkgo, however I am changing my mind – I think I would prefer to start with the basic Go testing framework.

This is my initial stab at pkg/store/kafka_test.go:

package store

import "testing"

func TestKafkaStorePublish(t *testing.T) {
   // Arrange
   // Ensure that the kafka store implements the Store interface
   var store Store
   store = NewKafkaStore()
   
   // Send an event to it
   event := Event{Name: "My Test Event"}
   streamUUID := "dsdsds"
   // Act
   store.Publish(streamUUID, event)

   // Assert
   collection, err := store.AllEvents(streamUUID)
   if err != nil {
      t.Error("The AllEvents method returned an error: " + err.Error())
      return
   }
   if collection.Count() != 1 {
      t.Errorf("Only 1 event should have been persisted, but %d were found", collection.Count())
   }
   if collection.Next().Name != "My Test Event" {
      t.Error("The event stored was not the one provided")
   }
}

I am assuming an iterator-type interface for AllEvents, as we don’t want to be building a huge slice of structs just for the caller to grab the last one.  It is unlikely we will ever want random access.

I’m new to Go – so the code above may not be the best way of doing things, but we all have to start somewhere and I’m trying to take baby steps and document them on here.

I can now run

go test ./...

and I get

pkg/store_test.go:7:11: undefined: NewStore
pkg/store_test.go:8:11: undefined: Event
? bitbucket.org/garydtaylor/eventstore [no test files]
FAIL bitbucket.org/garydtaylor/eventstore/pkg [build failed]
No surprises there – I’ve not implemented it yet

Designing How It Works

So, the next step is to decide how this library is going to work so we can write a spec. Here are some key things

  1. The top level interface will be able to publish events
  2. The top level interface will be able to subscribe to events via a channel.  These will be normalised events for the framework – not kafka events directly.
  3. The top level interface can replay all events from the beginning
  4. The top level interface can replay all events from a version number
  5. The top level interface can replay all events and include the last snapshot
  6. The top level interface can specify an adaptor to use but it will default to the kafka adaptor
  7. The top level interface can request a snapshot is taken

In general, this top level interface will keep events to itself unless someone has registered an interest in a type of event.  An event will be anything that implements the ‘RawEvent’ interface (hopefully I will think of a better name).

At first, I was tempted to have a ‘store’ and an ‘adapter’ (Kafka, Redis etc.), but I have decided to merge these two, so we have a common interface for a store with concrete implementations such as ‘KafkaStore’ and ‘RedisStore’.
This interface feels right to me at the moment – for anything that supports key / value writing and publish / subscribe

So, off I go to define my first test, which proves the publisher works end to end – the only way of validating this is to use the subscriber or the interface which reads events, whichever is easiest.  Note that because we are testing publishing, we are not interested in anything apart from whether the event gets stored.  Validating that the event can be fetched or subscribed to is part of another test.

A key decision before even writing the test is the interface.  I need to represent events somehow.  I would have liked each event defined as a struct in the application, but I was struggling to work out how to dynamically create these different event structs based on the data coming from the pub/sub system.

It would need a registry-type system that might map ‘blog_created’ to a struct called ‘BlogCreated’.  With Go not being a dynamic language, we cannot just do ‘constantize’ (that might not mean anything to you unless you are a Ruby developer) – but we could have a map defining this relationship, created at application startup.

So, we could have an Init function which sets up the map – maybe a singleton object, but we will need to be very careful of multiple goroutines accessing it.  Maybe use the check-lock-check (double-checked locking) approach.
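
To make that concrete, here is a minimal sketch of what such a registry might look like.  All the names here (Register, New, BlogCreated) are mine, not part of the eventual library, and I’ve used an RWMutex rather than a literal check-lock-check, since that’s the more usual Go shape:

```go
package main

import (
	"fmt"
	"sync"
)

// BlogCreated is a stand-in for an application-defined event struct.
type BlogCreated struct{ Title string }

var (
	mu       sync.RWMutex
	registry = map[string]func() interface{}{}
)

// Register maps a wire-format event name (e.g. "blog_created") to a
// factory function that builds the matching struct.  It would typically
// be called once at application startup.
func Register(name string, factory func() interface{}) {
	mu.Lock()
	defer mu.Unlock()
	registry[name] = factory
}

// New looks up the factory for an event name and builds a fresh instance.
func New(name string) (interface{}, bool) {
	mu.RLock()
	defer mu.RUnlock()
	factory, ok := registry[name]
	if !ok {
		return nil, false
	}
	return factory(), true
}

func main() {
	Register("blog_created", func() interface{} { return &BlogCreated{} })
	e, ok := New("blog_created")
	fmt.Println(ok, e)
}
```

If all registration happens before any goroutines start (for example from init functions), Go’s `sync.Once` – or no locking at all – would sidestep the check-lock-check question entirely.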

An alternative would be to have one generic event struct which carries the data as a map or similar.  I guess the calling code would be uglier, but the design would be simpler.

It’s decision time (tomorrow!!)

Ok, tomorrow arrived and the decision is made.  I am going to learn to walk before I run and implement a generic event solution: a single structure which, for now, carries JSON data as the event payload.  Another layer can be added later that allows custom structs to be defined per event, with those structs stored in and retrieved from the event stream.
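
As a hedged sketch of what that single generic structure could look like – the names NewEvent, Name and Decode are assumptions of mine, not the final interface:

```go
package main

import (
	"encoding/json"
	"fmt"
)

// Event is a single generic event structure carrying its payload as
// raw JSON.  A later layer could decode this into a typed struct
// chosen per event name.
type Event struct {
	name string
	body string // JSON-encoded payload
}

// NewEvent builds an event from a name and a JSON payload string.
func NewEvent(name, body string) *Event {
	return &Event{name: name, body: body}
}

// Name returns the event's wire-format name.
func (e *Event) Name() string { return e.name }

// Decode unmarshals the JSON payload into out.
func (e *Event) Decode(out interface{}) error {
	return json.Unmarshal([]byte(e.body), out)
}

func main() {
	e := NewEvent("blog_created", `{"title":"Hello"}`)
	var payload struct{ Title string }
	if err := e.Decode(&payload); err != nil {
		panic(err)
	}
	fmt.Println(e.Name(), payload.Title)
}
```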

So, as all events need a UUID, I need to generate one somehow.  I’ve seen this done many times before, so I assume there is a library – a quick google finds https://github.com/satori/go.uuid

So, we run the following in the project dir

dep ensure -add github.com/satori/go.uuid

We then add it to the import in the kafka_test.go and change the definition of streamUUID to

streamUUID := uuid.NewV4().String()

Which, interestingly, is contrary to what the docs tell us.  The NewV4 method is supposed to return a value and an error, but in the version dep pulled in it just returns the UUID – presumably the docs describe a different release of the library.

The next thing to think about is how to return a collection of events that are lazily loaded.  Whilst not important right now, the last thing we want to do is return an array or slice of events, as there could be a LOT of them and we may only want the first one!

This seems like an ideal candidate for channels.  The function returns a channel that the caller reads from; as the channel empties, a goroutine fetches another batch from storage.  The batch size can be anything from 1 upwards, and I guess it’s a trade-off between how many events you want in memory at any one time and the performance hit of fetching events from storage individually.
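
Sketching that channel idea out (all names here are illustrative, not the library’s):

```go
package main

import "fmt"

// streamEvents illustrates the channel-based design: a goroutine
// feeds events into a buffered channel and the caller just ranges
// over it.  The hard-coded slice stands in for fetching batches
// from Kafka/storage.
func streamEvents(batchSize int) <-chan string {
	out := make(chan string, batchSize)
	go func() {
		defer close(out)
		for _, e := range []string{"event-1", "event-2", "event-3"} {
			// If the caller stops reading early, this send blocks
			// forever and the goroutine leaks.
			out <- e
		}
	}()
	return out
}

func main() {
	for e := range streamEvents(2) {
		fmt.Println(e)
	}
}
```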

But how does this other goroutine get garbage collected?  Not sure yet.

I did a bit of research on this, and a lot of people say the channel idea is not so nice because the caller has to be responsible for terminating the iteration so that the goroutine can be closed down – otherwise we end up with leaked goroutines sat there doing nothing.

So, I took a look at other libraries, including the SQL library in Go’s standard library, and decided to use a simple ‘Next’ function for iterating over the collection lazily.  This allows us to fetch in batches but iterate one event at a time, so only our chosen buffer size is held in memory at once.

I decided to define an interface for this which currently looks like this (which I put in pkg/store/event_lazy_iterator.go)

package store

type EventLazyIterator interface {
   Count() (int, error)
   Next() (EventInterface, error)
}

This won’t be the final interface, but it’s enough to get me going.

So, next, I wrote the starting point for an implementation of this interface for Kafka.  It looks like this and is in pkg/store/kafka_events.go

package store

import "github.com/satori/go.uuid"

type KafkaEvents struct{}

func NewKafkaEvents() *KafkaEvents {
   return &KafkaEvents{}
}

func (events *KafkaEvents) Count() (int, error) {
   return 2, nil
}

func (events *KafkaEvents) Next() (EventInterface, error) {
   return NewEvent("MyEvent", uuid.NewV4().String()), nil
}
As you can see, this contains temporary code just so it satisfies the interface – the real implementation will come next.  I don’t need to put TODOs in there to remind me, as the tests will fail until the implementation is done properly.

So, lets go back to our test code and see what it should look like, knowing what we know now.

For a start, I have renamed it to pkg/store/kafka_store_test.go and the implementation will be in pkg/store/kafka_store.go

and it looks like this:

package store

import (
   "fmt"
   "testing"

   "github.com/satori/go.uuid"
)

func TestKafkaStorePublish(t *testing.T) {
   // Arrange
   // Ensure that the kafka store implements the Store interface
   var store Store = NewKafkaStore()

   // Send an event to it
   event := NewEvent("My Test Event", "1234566745645645645654645")
   streamUUID := uuid.NewV4().String()
   // Act
   err := store.Publish(streamUUID, event)
   if err != nil {
      t.Error("The store's Publish function returned an error: " + err.Error())
      return
   }


   // Assert
   collection, err := store.AllEvents(streamUUID)
   if err != nil {
      t.Error("The AllEvents method returned an error: " + err.Error())
      return
   }

   count, err := collection.Count()
   if err != nil {
      t.Error("The events iterator returned this error calling Count " + err.Error())
   }

   if count != 1 {
      t.Error(fmt.Sprintf("Only 1 event should have been persisted, but %d were found", count))
   }

   event, err = collection.Next()

   if err != nil {
      t.Error("The events iterator returned this error calling Next " + err.Error())
   }

   if event.Name() != "My Test Event" {
      t.Error("The event stored was not the one provided")
   }
}

I have probably gone over the top with the error handling in the test, but as I am doing TDD I want every error reported sensibly, to save me chasing bugs in the wrong direction in the future.

So, if I run the test suite now – lets see what we get

Garys-MacBook-Pro:eventstore garytaylor$ go test ./...
? bitbucket.org/garydtaylor/eventstore [no test files]
--- FAIL: TestKafkaStorePublish (0.00s)
 kafka_store_test.go:38: Only 1 event should have been persisted, but 2 were found
 kafka_store_test.go:48: The event stored was not the one provided
FAIL
FAIL bitbucket.org/garydtaylor/eventstore/pkg/store 0.009s

This is perfect – it is failing correctly – because I have hard coded 2 into the count method, the test is failing.

So, for now, I am happy that I have defined my interface for publishing and validating the event, so I am going to commit my code and carry on in the next blog post.
