Black hat knowledge for white hat programmers [closed]


There's always skepticism from non-programmers when honest developers learn the techniques of black hat hackers. Obviously, though, we need to learn many of their tricks so we can keep our own security up to par.

To what extent do you think an honest programmer needs to know the methods of malicious programmers?


I'm coming in late on this, as I just heard about it on the podcast. However, I'll offer my opinion as someone who has worked on the security team of a software company.

We actually took developer education very seriously, and we'd give as many teams of developers as possible basic training in secure development. Thinking about security really does require a shift from the normal development mindset, so we'd try to get developers thinking in a how-to-break-things frame of mind.

One prop we used was one of those home safes with the digital keypad. We'd let developers examine it inside and out to try to come up with a way of breaking into it. (The solution was to put pressure on the handle while giving the safe a sharp bash on the top, which would cause the bolt to bounce on its spring in the solenoid.) While we wouldn't give them specific black-hat techniques, we'd talk about the implementation errors that cause those vulnerabilities -- especially things they might not have encountered before, like integer overflows or compilers optimising out function calls (like memset to clear passwords).

We also published a monthly security newsletter internally, which invited developers to spot security-related bugs in small code samples; the results certainly showed how much they would miss.
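To make those two implementation errors concrete, here is a minimal C sketch. The function names are illustrative, not any particular library's API; the `volatile` trick is one common way to stop a compiler from eliding the clear (C11's optional `memset_s` is another).

```c
#include <stddef.h>
#include <stdint.h>

/* Integer overflow: a size computation like count * size can wrap around,
   so a later allocation ends up smaller than the code assumes.
   Check before multiplying. */
int mul_would_overflow(size_t count, size_t size) {
    return size != 0 && count > SIZE_MAX / size;
}

/* A plain memset() just before a buffer goes out of scope can be removed
   by the optimiser as a "dead store", leaving the password in memory.
   Writing through a volatile pointer keeps the clear from being elided. */
void secure_clear(void *buf, size_t len) {
    volatile unsigned char *p = (volatile unsigned char *)buf;
    while (len--) {
        *p++ = 0;
    }
}
```

Both bugs are invisible in a casual code review, which is exactly why they made good newsletter material.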

We also tried to follow Microsoft's Security Development Lifecycle, which would involve getting developers to talk about the architecture of their products and figure out the assets and possible ways to attack those assets.

As for the security team, who were mostly former developers, understanding the black-hat techniques was very important to us. One of the things we were responsible for was receiving security alerts from third parties, and knowing how difficult it would be for a black hat to exploit some weakness was an important part of the triage and investigation processes. And yes, on occasion that has involved me stepping through a debugger to calculate memory offsets of vulnerable routines and patching binary executables.

The real problem, though, is that a lot of this was beyond developers' abilities. Any reasonably sized company is going to have many developers who are good enough at writing code, but just do not have the security mindset. So my answer to your question is this: expecting all developers to have black-hat knowledge would be an unwelcome and detrimental burden, but somebody in your company should have that knowledge, whether it be a security audit and response team, or just senior developers.


At the end of the day, nothing the 'black hats' know is criminal knowledge; it's just how the knowledge is applied. Having a deep understanding of any technology is valuable as a programmer; it's how we get the best out of the system. It's possible to get by these days without knowing the depths, since more and more frameworks, libraries and components have been written using such knowledge to save you having to know everything, but it's still good to dig from time to time.


I'm going to be a bit heretical and go out on a limb and say:

  • You really need to talk to the sysadmin/network folks that secure their machines. These folks deal with the concept of break-ins every day, and are always on the lookout for potential exploits to be used against them. For the most part, ignore the "motivation" aspect of how attackers think, as the days of "hacking for notoriety" are long gone. Focus instead on methodology. A competent admin will be able to demonstrate this easily.

When you write a program, you are presenting what is (hopefully) a seamless, smooth interface to ${whatever-else-accepts-your-programs-I/O}. In this case, it may be an end-user, or it may be another process on another machine, but it doesn't matter. ALWAYS assume that the "client" of your application is potentially hostile, regardless of whether it's a machine or a person.

Don't believe me? Try writing a small app that takes sales orders from salespeople, with a company rule that you need to enforce through that app but that the salespeople are constantly trying to get around so they can make more money. Just this little exercise alone will demonstrate how a motivated attacker - in this case, the intended end-user - will actively search for ways to either exploit flaws in logic or game the system by other means. And these are trusted end-users!
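The defense is always the same: enforce the rule on the server, never in the UI. A hypothetical sketch in C (the `order_t` fields and the 15% discount cap are made up for illustration):

```c
/* Hypothetical sales order as submitted by the client application. */
typedef struct {
    double unit_price;
    int    quantity;
    double discount_pct;  /* what the salesperson's UI claims */
} order_t;

#define MAX_DISCOUNT_PCT 15.0  /* the company rule being gamed */

/* Validate server-side: never trust that the client UI enforced the rule,
   because the motivated end-user will find a way around the UI. */
int order_is_valid(const order_t *o) {
    if (o->quantity <= 0 || o->unit_price <= 0.0)
        return 0;
    if (o->discount_pct < 0.0 || o->discount_pct > MAX_DISCOUNT_PCT)
        return 0;
    return 1;
}
```

The point isn't the check itself; it's where it lives. Any rule enforced only in the client is a rule the client's user can delete.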

Multiplayer online games are constantly at war with cheaters because the server software typically trusts the client; and in all cases, the client can and will be hacked, resulting in players gaming the system. Think about this - here we have people who are simply enjoying themselves, and they will use extreme measures to gain the upper hand in an activity that doesn't even involve making money.
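The standard countermeasure is an authoritative server: the server, not the client, decides what is physically possible. A tiny illustrative sketch in C (the position struct, speed cap, and function name are assumptions, not any real game's API):

```c
/* Client-reported position update, as seen by the server. */
typedef struct {
    double x, y;
} pos_t;

/* Reject any movement a legitimate client could not have produced:
   a hacked client can claim anything, so the server re-derives the
   maximum distance coverable in dt seconds and checks against it. */
int move_is_plausible(pos_t from, pos_t to, double dt, double max_speed) {
    double dx = to.x - from.x;
    double dy = to.y - from.y;
    double limit = max_speed * dt;
    /* compare squared distances to avoid needing sqrt */
    return (dx * dx + dy * dy) <= limit * limit;
}
```

Real anti-cheat is far more involved, but the principle shown here - re-validate every client claim server-side - is the part that generalizes to all software.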

Just imagine the motivation of a professional bot herder who makes their living this way... writing malware so they can use other people's machines as revenue generators, selling their botnets to the highest bidder for massive spam floods... yes, this really does happen.

Regardless of motivation, the point remains, your program can, and at some point will, be under attack. It's not enough to protect against buffer overflows, stack smashing, stack execution (code-as-data is loaded into the stack, then a return is done to unload the stack, leading to execution of the code), data execution, cross-site scripting, privilege escalation, race conditions, or other "programmatic" attacks, although it does help. In addition to your "standard" programmatic defenses, you'll also need to think in terms of trust, verification, identity, and credentials - in other words, dealing with whatever is providing your program input and whatever is consuming your program's output. For example, how does one defend against DNS poisoning from a programmatic perspective? And sometimes, you can't avoid things in code - getting your end-users to not turn over their passwords to coworkers is an example.

Incorporate those concepts into a methodology for security, rather than a "technology". Security is a process, not a product. When you start thinking about what's "on the other side" of your program, and the methods you can employ to mitigate those issues, it will become much clearer as to what can go right, and what can go horribly wrong.


To a large extent. You need to think like a criminal, or you're not paranoid enough.


To what extent do you think an honest programmer needs to know the methods of malicious programmers?

You need to know more than them.


I work as a security guy, not a developer, and based on my experience I can simply say you can't learn this stuff as well as black hats or professional white hats do unless it's your second profession. It's just too time-consuming.

The most important bit, though, is seeing some bad guys or professionals in action and understanding the possibilities and the impact of insecure code.

By learning some tricks, but not lots of them, one might get a "false sense of security" because he or she can't hack something - although a better-skilled attacker might hack the same thing within minutes.

Having said that, as long as you keep this in mind, I think it's good to learn some attacks; it's fun and quite educational to learn how to break stuff.


It pays to be as "innocent as doves, and as wise as serpents," and learn techniques that folks with nefarious purposes do. That said, such knowledge should be used carefully. "With great power comes great responsibility".


Definitely learn the dark side. Even if you don't learn the actual techniques, at least make the effort to learn what's possible.


Good resources to learn the tricks of the trade are Reversing: Secrets of Reverse Engineering and Hacking: The Art of Exploitation. They're written for both sides - these could be used to LEARN how to hack, but they also give ways to prevent these kinds of attacks.


One word of caution: State of Oregon vs. Randal Schwartz.

Having been a small part of investigating two separate incidents at our site, I'd say the odds of learning about an exploit before it's used against you are vanishingly small. Perhaps if you dedicate your career to being a white hat you'll stay on top of all the potential holes in most of the popular hardware/software stacks. But for an ordinary programmer, you are more likely to be in reaction mode.

You do have a responsibility to know how your own software could be hacked and a responsibility to stay reasonably up-to-date with third-party software. It would be good to have an emergency plan in place to deal with an attack, especially if you are a high-profile or high-value target. Some places will want to shut a hole immediately, but our site tends to leave certain holes open to assist law enforcement in catching the perpetrators. The IT security team occasionally announces internally that it will be conducting a port scan so that SA's don't freak out about it.


I personally don't see the technical difference. Sure the motives are different but the technical game is the same. It's like asking what kind of warfare the "goodies" need to know about.

The answer is all of it, even if they don't actively practice it.


One skill that is often missed is social engineering.

Many people simply do not recognize when they are being conned. At a prior company, a VP ran a test by having three (female) temp workers in a conference room call programmers and sysadmins and work from a script to try to get someone to grant access or reveal passwords. Each of the temps got access to something within the first hour of calls.

I bet if a similar test were run at any mid- to large-sized company, it would get the same results.


I think part of 'coding defensively' includes knowing malicious techniques, but at the same time, you don't necessarily need to know all of the techniques in order to defend against them effectively. For instance, knowing about buffer-overflow attacks isn't the reason to protect your buffers from overflowing. You protect them from overflowing because if they do, it could wreak havoc in your program regardless of whether it's a bug or an attack.
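A small C sketch of that point: a bounded copy protects the buffer whether the over-long input is a careless bug or a crafted exploit (the function name and buffer sizes here are illustrative):

```c
#include <string.h>
#include <stddef.h>

/* The classic bug is strcpy(dst, src) with no length check: an over-long,
   possibly attacker-controlled string writes past the end of dst.
   This bounded copy truncates instead, and always NUL-terminates. */
void copy_name(char *dst, size_t dstlen, const char *src) {
    if (dstlen == 0)
        return;
    size_t n = strlen(src);
    if (n >= dstlen)
        n = dstlen - 1;        /* truncate rather than overflow */
    memcpy(dst, src, n);
    dst[n] = '\0';
}
```

Note that the defensive habit needs no knowledge of shellcode or return addresses; it falls out of simply refusing to write past the end of a buffer.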

If you write very thoroughly checked and well architected code, then the malicious attacks will be unable to penetrate, because good architecture should automatically lock out side-effects and unauthorized access.

However, that last paragraph assumes that we have a perfect job where we are given incredible amounts of time to make our code just right. Since such a job doesn't exist, then knowing the malicious techniques is a good shortcut, because it means that although your code isn't perfect, you can create 'un-workarounds' for those exploits to make sure that they do not get through. But, those don't make the code better, and they don't make the application better.

Ultimately, knowing malicious exploits is something that is good to be aware of, but 95% of them will be covered by simply making sure you adhere to best practices.


Design for evil. "When good is dumb, evil will always triumph."

In short, if you don't think like a criminal, that doesn't mean the criminals won't.


One of the techniques the White Hats need to learn is how to test/mitigate/think in terms of social engineering, because the biggest security threat is people.

White Hats are good at manipulating bits, but people are the ones manipulated far more often by the Black Hats.


I'm going to take the controversial stance and say that there's some black-hat knowledge that you don't need to be a good white-hat hacker. A doctor doesn't need to know how to genetically engineer a virus in order to effectively treat illness.


We white hats and gray hats need to be good at a million things; the black hats and skiddies only have to succeed at one thing.


Basically, almost all security exploits used by hackers are bugs introduced by poor programming style or discipline. If you write code to protect against bad data and invalid call operations, you'll block the majority of security vulnerabilities in your code.
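"Protecting against bad data" mostly means never believing what an untrusted buffer says about itself. A minimal C sketch, using a made-up length-prefixed wire format (the two-byte big-endian prefix and the function name are assumptions for illustration):

```c
#include <stddef.h>
#include <stdint.h>
#include <string.h>

/* Parse one length-prefixed record from an untrusted buffer.
   The attacker controls the declared length, so validate it against
   what was actually received AND the output capacity before copying. */
int read_record(const uint8_t *buf, size_t buflen,
                uint8_t *out, size_t outlen, size_t *reclen) {
    if (buflen < 2)
        return 0;                        /* too short to hold the prefix */
    size_t declared = ((size_t)buf[0] << 8) | buf[1];
    if (declared > buflen - 2)
        return 0;                        /* declared length is a lie */
    if (declared > outlen)
        return 0;                        /* would overflow the caller */
    memcpy(out, buf + 2, declared);
    *reclen = declared;
    return 1;
}
```

A large share of real-world vulnerabilities - in file parsers, network daemons, codecs - come down to skipping exactly these checks.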

If you try to protect your code from hacking/abuse/etc. entirely on your own, you'll spend way too much time on it. Just buy a package to protect the basics and move on.


You have to understand the methods the 'bad guys' use, so some understanding is mandatory.

For the average developer, I think it is enough to grok the basic principles of what they are doing in order to avoid creating vulnerabilities in their projects.

For somebody who works in a security relevant area (banking comes to mind, or credit card data from an online shop), a deeper understanding is required. Those developers need to go 'under the hood' of how a 'bad guy' operates and which techniques he uses.


To the point where, by learning their ways, he starts to think in their direction. And then he must choose which side he wants "to belong to".

There is nothing malicious in technology itself ... knowledge is pure ... it's how you use it that determines how it shall be looked upon.


2 sides of the same coin. Other than intent -- what's the question? Same skills, different implementation.


When I hear the word blackhat, I think of someone who uses knowledge of computers to break into banks and do other mischievous things. A whitehat knows everything that the blackhat knows, but just doesn't do anything bad with it.

Therefore, you don't have to care about or know what a "blackhat" is to be secure...

Knowing how a blackhat thinks, when you're already an equivalent whitehat, doesn't help squat. It's like knowing, "John wants to break into my house and steal my iPod music". If you really cared about your iPod music, you should have had it secured anyway.
