Study for the Future

  • A question that often arises, and that very much surprises me, is one that can basically be distilled to "should I study legacy, old, current, or future technologies when going to school, getting a certification, or working in my home lab?" I am always shocked by this question, so I think it is worth looking into.

    The simple answer is... when studying you always want to learn the latest, greatest material that you can. You want to be looking at proposed ideas, the latest releases, beta code - whatever it takes to get ahead of the curve.

    But why? This is easy. You study for the career in your future, not the one in your past. As you study, time progresses. What you learn today does not get to be used until tomorrow. The bigger the area of study, the bigger this effect. Going to university, for example, means that what you learn at the beginning will wait four to nine years before it has its first chance to be applied in the real world. A certification might be months or years. And that assumes that you can apply the knowledge immediately, which is rarely the case.

    No matter what you are studying today, you will be lucky if it is not completely out of date by the time you are ready to leverage that training and knowledge - and that assumes you are studying as far into the future as possible.

    There are many people who suggest studying what is already mainstream or even old (like getting certifications on Windows Server 2012 R2 even when 2016 is already available). This really does not make sense. The knowledge of 2012 R2 is, in a way, educational debt. It has diminishing value; it is worth less day after day. The older the material you study, the greater this effect. Even if 2012 R2 knowledge is more useful today, it won't be tomorrow. And nearly all employers consider current knowledge equal to or greater than old knowledge, even if they themselves run older systems today.

    There is simply no effective value in old learning; learning what is current and what is coming is far better.

  • A great example is 2012 R2 versus 2016. If you learn 2012 R2, you are okay today, but when 2016 arrives in your environment you may be in the dark. The person with the 2016 knowledge is the one who will be useful in preparing for, migrating to, and managing 2016. Their future career prospects are much better than those of someone with only old knowledge. The person with 2016 certs is equal to the person with 2012 R2 certs today, but with any forward look at all, the person with the 2016 certs pulls ahead.

  • This is part of why the CompTIA A+ is so awful. It often contains material that is not just failing to be current, but half a decade or more behind - knowledge that was incredibly outdated and worthless even before the certification was written.

  • The same problem plagues universities trying to act as trade schools - they often have information, tools, training, and skills that are years or even decades behind. If you assume a school is five years behind, that its program takes five years, and that it takes graduates five years to reach mid-level careers where most of that learning can be used for the first time, we are talking about knowledge that is fifteen years behind the times!

  • There is, of course, value in history. But learning what is current teaches us about history naturally. We do not need to intentionally learn old ways of doing things to know about them. And by learning current knowledge and techniques we filter out the time that would be wasted learning about dead end technologies, approaches, ideas, etc.

    For example, learning about RAID 5 is useful, and learning about RAID 5 teaches us a lot about RAID 4. Learning RAID 2 and 3 is totally useless: these exist nowhere, cannot be used, and should not be. Knowing exactly how and why they were poor and left behind has no value. But if we study old material, we might spend a lot of time learning these vestiges and wasting time that could be used to learn more valuable things.

  • Also... getting resources on current and old technology and techniques is easy; these things are tested, documented, tried, and true. They are well known. We can Google them or talk to experienced people - knowledge about them is readily available. As long as we are training on current technology and techniques, it is trivial to find resources to bridge the gap on those occasions when we need to work on older ones.

    For example, someone training on Windows Server 2016 can easily work with Windows 2000. But a person training on Windows 2000 will have a much larger learning curve working on Windows Server 2016. And the person with the 2016 training and experience can bring valuable knowledge to working on 2000 (like how best to prepare it for updating later) that does not exist in the opposite direction.