Learning from History – The Conficker Outbreak

History is a great teacher. The lessons of the past matter not merely as anecdotes but for the wisdom we can draw from them. Dipping into this pool of history can help us comprehend the present, improve our response and avoid repeating mistakes that were made before.

The origins of the Internet and computers trace back to ideas and values centred on sharing capabilities, collaborating with peers and realizing new possibilities. The user base for computers was much smaller in those early days. Computer systems resembled isolated islands, siloed rather than connected, and the degree of security available seemed adequate for the era. The situation changed with the popularity of personal computing, the universal adoption of TCP/IP and the rise of the ISP, which brought the Internet within the reach of a much wider population. The connected world embraced a much larger user base comprising users of differing hues and varying intent – this was a new cyberworld! It brought technology within the reach of people who possessed the ideas, passion and ability to dream up and implement new possibilities. Unfortunately, not all of those possibilities were benevolent; many actions, though clever, were selfish or outright malicious. The cyberworld has faced this reality over the last few years, and IT infrastructure found itself at risk. Institutions, businesses and users put forth their responses to this challenge: new approaches and countermeasures were implemented at the network and application layers, at the endpoint, and across the entire lifecycle.

Implementing controls and countermeasures has its own challenges. Some solutions require human effort, and compared to machines, humans possess a peculiar trait. Machines execute instructions as directed, without distraction or digression. Humans, on the other hand, develop their own styles and work habits! They seem bound by practices they have grown accustomed to over the years. And the code that powers the technology world is developed by humans.

IT infrastructure, including hardware devices, operates under the control of code wired into hosts and devices. Most of the code in our world is not generated by algorithms but created by humans, and code written by humans is susceptible to human error and coding flaws. It is important to reduce the possibility of such errors. To that end, solutions that seek to minimise coding flaws have been developed and implemented. These solutions support the development of better quality code that is robust and carries fewer bugs or flaws.

The story behind the MS08-067 patch released by Microsoft presents an interesting narrative of the issues that trace their origins to faulty code. The story also carries lessons on responding effectively to security flaws. It unfolded in September 2008, when there were reports of hackers from China selling a Windows exploit kit. Using this kit, it was possible to exploit a Windows flaw and execute a buffer overflow over port 445 (reference: ‘A Foray into Conficker’s Logic and Rendezvous Points’ by Phillip Porras, Hassen Saïdi and Vinod Yegneswaran, Computer Science Laboratory, SRI International). The hackers sold the exploit for about $37. Around the same time, the malware Gimmiv was infecting computers using this very Windows vulnerability, and it is therefore believed that Gimmiv may have used the exploit these hackers were selling.

When this vulnerability became known, Microsoft released a patch on 23rd October to plug it. The release date itself served to advertise the significance and importance of the patch. Microsoft patches are normally released on the second Tuesday of the month, known within professional circles as ‘Patch Tuesday’. The 23rd of October did not correspond to a Patch Tuesday, and a patch that could not wait until the next one was likely an emergency patch with significant repercussions. To potential hackers, therefore, it was an indication that hosts that had not applied this patch were vulnerable and waiting to be exploited.

Information about such patches is a double-edged sword indeed. Hosts that apply the patch are protected against potential exploits; those that do not remain vulnerable. Since some hosts will not apply the patch promptly, attackers are presented with an opportunity to target these unprotected, vulnerable hosts.

It is interesting to note that the MS08-067 patch was released on 23rd October 2008. In less than a month, around 20th November 2008, indications of a sophisticated worm – Conficker – that exploited the same vulnerability were noted. Hosts that had not applied the patch were thus at significant risk. Conficker exploited the vulnerability, infested the host with malicious code and created a botnet. Within a month of its first discovery, the infection had spread rapidly to some 500,000 hosts; at its peak, Conficker had infected over 10 million. The worm used the exploit to gain entry over port 445, trigger a buffer overflow and write itself into memory as a DLL. Once inside the host, Conficker opened a backdoor through the firewall. For a while the infection remained dormant, waiting to connect to the botmaster. Conficker had an inbuilt domain generation algorithm that produced 250 new domain names each day, and behind one of those 250 domains lay the botmaster to which infected hosts attempted to connect. Since new domain names were generated dynamically every day, it was difficult to identify any single domain as the botmaster and shut down the entire botnet by targeting it. The worm also used sophisticated encryption to both pack itself and protect itself against reverse engineering. Overall, Conficker was a very sophisticated piece of malware that proved difficult to defeat.
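To make the domain generation idea concrete, here is a minimal, purely illustrative sketch in Python of a date-seeded algorithm of the kind Conficker used. This is not Conficker’s actual algorithm (the domain lengths, alphabet and top-level domains below are assumptions chosen for readability); it simply shows how infected hosts and the botmaster can independently compute the same daily list of rendezvous domains without ever communicating.

```python
import datetime
import random

ALPHABET = "abcdefghijklmnopqrstuvwxyz"
TLDS = ("com", "net", "org")  # assumption: a small set of TLDs for illustration


def generate_domains(day: datetime.date, count: int = 250) -> list[str]:
    """Derive the day's candidate rendezvous domains from the date alone.

    Any host that runs this on the same calendar day computes the same
    list, so infected machines can locate the botmaster without any
    hard-coded address that defenders could simply block.
    """
    rng = random.Random(day.toordinal())  # deterministic seed from the date
    domains = []
    for _ in range(count):
        length = rng.randint(8, 12)  # assumed length range, for illustration
        name = "".join(rng.choice(ALPHABET) for _ in range(length))
        domains.append(f"{name}.{rng.choice(TLDS)}")
    return domains


if __name__ == "__main__":
    today = datetime.date.today()
    candidates = generate_domains(today)
    print(f"{len(candidates)} candidates for {today}, first three: {candidates[:3]}")
```

The determinism cuts both ways: defenders who recover the algorithm can pre-compute each day’s 250 candidates and register or sinkhole them ahead of the infected hosts, but only by keeping pace with the generation schedule every single day, which is part of what made containing Conficker so laborious.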

It took a herculean effort and the co-operation of many talented and dedicated individuals to take the fight back to Conficker. Nor was this the only malware or the only botnet to have existed: there were examples of malware before Conficker came into existence, and it is likely that many more will be born in the future.

The patch protecting against the vulnerability exploited by Conficker was available before the outbreak of the malware. It was not the absence of a solution that was the problem; the issue was not applying the solution, or perhaps not even knowing about it, in a timely manner. Implementing an efficient mechanism to determine patch levels is an essential element of security hygiene. In addition, solutions that provide timely information about the availability of patches are very important for protecting IT infrastructure. Such solutions can help to patch vulnerable hosts within a short time of the patches becoming available, reducing the risk from unpatched and hence vulnerable hosts, as the sketch below illustrates.
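As a minimal sketch of what such a mechanism could look like, the Python fragment below checks whether the MS08-067 fix is present on a Windows host by querying installed hotfixes through the standard wmic utility. The KB identifier (KB958644 is the knowledge-base article commonly associated with MS08-067) and the wmic-based approach are assumptions for illustration; a real deployment would rely on proper patch-management tooling that covers every host and every advisory.

```python
import subprocess

# Assumption: KB958644 is the update corresponding to MS08-067;
# verify the exact ID against the vendor's security advisory.
REQUIRED_HOTFIX = "KB958644"


def installed_hotfixes() -> set[str]:
    """Return the hotfix IDs reported by the Windows 'wmic qfe' query."""
    result = subprocess.run(
        ["wmic", "qfe", "get", "HotFixID"],
        capture_output=True, text=True, check=True,
    )
    return {line.strip() for line in result.stdout.splitlines()
            if line.strip().startswith("KB")}


if __name__ == "__main__":
    if REQUIRED_HOTFIX in installed_hotfixes():
        print(f"{REQUIRED_HOTFIX} installed: host is patched against MS08-067.")
    else:
        print(f"{REQUIRED_HOTFIX} missing: host may be vulnerable, patch now.")
```

Even a simple check like this, run regularly across an estate, would have flagged the hosts Conficker went on to infect in the weeks between the patch release and the outbreak. History can thus be a great teacher, but only if we are sincere enough to learn.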
