The other day, I was watching one of the Test Driven Development (TDD) episodes in the Clean Coder video series by Robert C. Martin. In the episode, Robert Martin (aka Uncle Bob) was using the Red-Green-Refactor approach to refactoring some HTML formatter class. As he was walking through the demonstration, something completely tangential struck me - his HTML formatter class had "state." The ease with which he invoked internal methods of this object gave me pause - I think "state" is something that I have been overlooking for a long time.
Historically, when I've created a "utility" class, I've done so with the intention of using it as a singleton that has an exposed method. This requires the class to be stateless, being used over and over again by many requests in a given application. As a side effect of this approach, any state required internally would have to be passed around with every internal method call.
This creates long, private method signatures that obfuscate the meaning of the code. If I were to, instead, treat the singleton as a "newable" entity, to be instantiated as needed, I could maintain useful state information that would allow for the coding of smarter private methods.
To put this in practical terms, let's look at a service object that I created for an AngularJS application. A while back, I blogged about how AngularJS maps the view model ($scope) onto DOM (Document Object Model) elements using the $$hashKey property. To integrate cached data into this rendering approach, I created a service object that would copy the $$hashKey values from one data structure (the cached one) into another data structure (the live one).
This service object lived for the duration of the web app and exposed one method:
hashKeyCopyService.copyHashKeys( source, destination ) :: destination
Internally, the copyHashKeys() method builds up an index of all $$hashKey values contained within the source object; then, it applies this index to the destination object. And, since this service is a singleton, all of this data has to be passed around with every method call.
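To make that concrete, here's a rough sketch of the stateless shape (this is not the actual service code; the recursive traversal and the path-based index scheme are my assumptions). Notice how the index has to be threaded through every private call:

```javascript
// Hypothetical sketch of the stateless singleton approach: every private
// step must receive the accumulated state (the index) as an argument.
var hashKeyCopyService = {

	copyHashKeys: function ( source, destination ) {

		var index = this._buildIndex( source, {}, "root" );

		this._applyIndex( destination, index, "root" );

		return( destination );

	},

	// The index and path must be passed along with every recursive call.
	_buildIndex: function ( node, index, path ) {

		if ( node && ( typeof node === "object" ) ) {

			if ( node.$$hashKey ) {

				index[ path ] = node.$$hashKey;

			}

			for ( var key in node ) {

				if ( node.hasOwnProperty( key ) && ( key !== "$$hashKey" ) ) {

					this._buildIndex( node[ key ], index, ( path + "." + key ) );

				}

			}

		}

		return( index );

	},

	_applyIndex: function ( node, index, path ) {

		if ( node && ( typeof node === "object" ) ) {

			if ( index[ path ] ) {

				node.$$hashKey = index[ path ];

			}

			for ( var key in node ) {

				if ( node.hasOwnProperty( key ) && ( key !== "$$hashKey" ) ) {

					this._applyIndex( node[ key ], index, ( path + "." + key ) );

				}

			}

		}

	}

};
```

Every internal method carries two extra parameters (index, path) that exist only to shuttle state from one call to the next.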
Now, what if I didn't use a Singleton? What if, instead, I instantiated this "service entity" every time it was needed? From the outside, things would look fairly similar:
new HashKeyCopier( source, destination ).copyHashKeys() :: destination
... but, from the inside, things would be significantly more straightforward. Rather than passing around the state with every internal method call, state could be centralized; then, each private method could write to and read from that state information as necessary.
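Here's a hedged sketch of that instantiable version (again, the traversal details are my assumptions, not the actual code). Once the index lives on the instance, the private method signatures shrink:

```javascript
// Hypothetical sketch of the instantiable version: the source, destination,
// and index live on the instance, so private methods take fewer arguments.
function HashKeyCopier( source, destination ) {

	this.source = source;
	this.destination = destination;

	// Centralized state - no longer passed around internally.
	this.index = {};

}

HashKeyCopier.prototype.copyHashKeys = function () {

	this._buildIndex( this.source, "root" );
	this._applyIndex( this.destination, "root" );

	return( this.destination );

};

HashKeyCopier.prototype._buildIndex = function ( node, path ) {

	if ( node && ( typeof node === "object" ) ) {

		if ( node.$$hashKey ) {

			this.index[ path ] = node.$$hashKey;

		}

		for ( var key in node ) {

			if ( node.hasOwnProperty( key ) && ( key !== "$$hashKey" ) ) {

				this._buildIndex( node[ key ], ( path + "." + key ) );

			}

		}

	}

};

HashKeyCopier.prototype._applyIndex = function ( node, path ) {

	if ( node && ( typeof node === "object" ) ) {

		if ( this.index[ path ] ) {

			node.$$hashKey = this.index[ path ];

		}

		for ( var key in node ) {

			if ( node.hasOwnProperty( key ) && ( key !== "$$hashKey" ) ) {

				this._applyIndex( node[ key ], ( path + "." + key ) );

			}

		}

	}

};
```

Each private method now reads from and writes to `this.index` directly, rather than having the state threaded through its signature.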
I think that I need to be thinking about state much more often. I think it will force me to create smaller, more cohesive components that have a focused responsibility. Perhaps this is the moment - the moment where I can look back and see a turning point in the way I think about and build software?
Interesting pov ...
The first 'flag' that comes to mind is performance ... would it not be more advantageous to have copyHashKeys as a static method rather than having to create a new HashKeyCopier object for each hashkey copy?
just sayin' ... :-)
IMO, when dealing with libraries and APIs (not ordinary business logic), having internal state is only important for fluent interfaces (aka chainable methods), structures that follow the Builder pattern, and for storing details about underlying systems (paths, URIs, usernames+passwords, db names, etc.).
Performance is certainly a concern, but in languages with references or runtimes with copy-on-write mechanisms, passing around state via parameters is a matter of some extra bytes of stack per invocation.
However, abstracting algorithms away from state allows for more modular routines. Having shared internal state means that routines must have side effects in order to accomplish anything. Applying a more functional approach will ultimately produce less error-prone, more focused code.
I wish I could give you some concrete example, but my daughter is demanding attention. Functional programming folks can guide you better with this topic.
I think the performance problem would be negligible. At least, in my particular use-case. The $$hashKeys are only copied once per interface. That said, something that I hear time and time again from people is that easier to use code typically outweighs performant code ... until performance becomes a problem.
Also, you have to understand a little bit of my programming background. In my applications, I tend to create massive "Service" objects that are thousands of lines long and are basically a huge collection of "static" methods that do everything you need for some area of the app.
These beasts are definitely more error-prone than anything smaller and more cohesive that I might create, whether in OOP or FP.
I think what I find fascinating about stateful utilities is that they would naturally create a barrier against the improper growth of a given component. Meaning, I can't just randomly add methods to an object because they loosely relate to it - since I have to interact with state, the low cohesion will become much more apparent much earlier on.
So, I think the statefulness would help enforce:
* High cohesion.
* Single responsibility.
* Small method signatures.
* Small classes.
That said, I am not advocating moving ALL objects to statefulness. I am only going to try and think about state more often than I do now (which is almost never).
I think it really depends on the use case whether your singleton/stateless pattern works or whether it makes more sense to use a traditional transient object.
If the code is being executed in very high iteration counts, then I think the stateless singleton pattern makes a ton of sense, just because the cost of initializing an object can be high.
If you're expecting the code to be run potentially hundreds or thousands of times in a request, then saving the initialization cost can be huge.
If, however, the code is something that's run/initialized only a handful of times, then the initialization overhead is minimal.
That said, you could always build a stateful object that's reusable (i.e. you initialize the object once, but have the ability to "reset" the state of the object w/ new values). That removes the initialization cost and allows you to track state for that instance.
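That "reset" idea might be sketched like so (a minimal sketch with hypothetical names; the copy itself is simplified to a shallow, top-level $$hashKey copy just to show the reset mechanics):

```javascript
// Hypothetical sketch of a reusable, resettable stateful object: pay the
// construction cost once, then rewire the state for each new operation.
function HashKeyCopier( source, destination ) {

	this.reset( source, destination );

}

HashKeyCopier.prototype.reset = function ( source, destination ) {

	this.source = source;
	this.destination = destination;

	// Returning "this" enables: copier.reset( a, b ).copyHashKeys().
	return( this );

};

HashKeyCopier.prototype.copyHashKeys = function () {

	// Simplified, shallow copy - a placeholder for the real traversal.
	if ( this.source && this.source.$$hashKey ) {

		this.destination.$$hashKey = this.source.$$hashKey;

	}

	return( this.destination );

};

// One instance, reused across many copy operations.
var copier = new HashKeyCopier( { $$hashKey: "001" }, {} );
var firstResult = copier.copyHashKeys();
var secondResult = copier.reset( { $$hashKey: "002" }, {} ).copyHashKeys();
```

The instance still tracks per-operation state; only the construction cost is amortized across calls.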
Re: The $$hashKeys ...
This is a great topic which I'm hardly qualified to address with any merit of authority.
With your use-case and in this context, you are probably correct though ... unless you were writing a public-facing API, the performance issues would be a matter of preference / coding style ...
I feel like Dan kinda' nailed it though - the performance hit for high availability objects may be a bit much. There are cases where you want big classes with static methods and still follow SRP ...
Take Math for example ... it would kinda' suck if you had to create a Math object to add two integers.
Yeah, certainly context is going to be a large influence on which way you go. I guess I was just excited about this thought-path because I have, to date, been using stateless singletons by default for almost everything... ever :)
Also, there's something to be said about having a "static" method that creates NEW objects internally. This way, you can present a simple signature to the calling scope; but choose to create new utility objects in a fully encapsulated way.
Re: static methods ...
Maybe I'm misunderstanding what you mean ... as static methods don't create object instances.
Is your reference to a static 'factory' method that creates objects?
Ah, sorry, having a bit of "hump day" brain activity :) You are right. I meant that the methods were on Singleton objects, not that they were "static." Thanks for catching that!
NP chief ...
OO related concepts can be quite tricky to explain ...
Like you, I have recently started experimenting with this myself. I had also been creating huge service classes as singletons until one day it hit me: "I am not really doing OO programming; I am really just defining a class full of functions." I ran into this problem while trying to reuse functionality. Because my service had no state, I was having all kinds of issues reusing the functionality.

The solution I came up with was to leave my service a singleton. I then defined a transient class with the data and the behavior I needed to re-use, and modified my service so that, instead of doing the work itself, it simply instantiated my new class and returned a method call. I was then able to effortlessly re-use the same code in the two other places I needed it.

Since then, I have been using transient objects more often without really noticing any degradation in performance. In one instance, because I knew the data did not change often, I was able to cache the returned data so that I didn't always have to instantiate.
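That delegation pattern - a stateless singleton that instantiates a short-lived transient and returns its result - might be sketched like this (all names here are hypothetical, and the shallow copy is a stand-in for the real work):

```javascript
// Hypothetical transient: per-call state lives on the instance.
function HashKeyCopier( source, destination ) {

	this.source = source;
	this.destination = destination;

}

HashKeyCopier.prototype.copyHashKeys = function () {

	if ( this.source && this.source.$$hashKey ) {

		this.destination.$$hashKey = this.source.$$hashKey;

	}

	return( this.destination );

};

// Hypothetical singleton: a stateless facade that keeps its simple public
// API while fully encapsulating the creation of the transient worker.
var hashKeyCopyService = {

	copyHashKeys: function ( source, destination ) {

		return( new HashKeyCopier( source, destination ).copyHashKeys() );

	}

};
```

The calling scope still sees a single method on a long-lived service; the stateful instance is created, used, and discarded entirely behind that facade.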