My entry in Tribe of Hackers


Tim here:

If you haven't read Tribe of Hackers, you absolutely should. Marcus Carey’s take on Tim Ferriss’s Tribe of Mentors is brilliant. It is an incredible deep dive into some of the best minds in cybersecurity. I found the questions insightful and wanted to add my own answers to the public record.

1. If there is one myth that you could debunk in cybersecurity, what would it be?

Mac versus PC. Apple versus Android. It doesn't matter. They are all so close to each other at this point that it comes down to personal preference. What really matters most is that they are kept up to date.

2. What is one of the biggest bang-for-the-buck actions that an organization can take to improve their cybersecurity posture?

Asset management. I can't tell you how many organizations I have walked into that have big-money cybersecurity appliances and certifications without a functional asset management system. You can't protect yourself if you don't know where anything is.

3. How is it that cybersecurity spending is increasing but breaches are still happening?

A lack of focus on the fundamentals. Again, these companies are spending thousands of dollars and man-hours implementing fancy new tools plastered with buzzwords (Next Gen, AI, Blockchain, etc.) that aren't going to provide any ROI, because they don't have asset management or regular patching down.

4. Do you need a college degree or certification to be a cybersecurity professional?

Nope! This is a field where you can succeed without a formal education. Certifications and degrees are great for passing the HR check, but I have met people with no degree who have forgotten more about security than I will ever know. I have also met people with a degree who don't know a thing about working in security. It is not about being able to take a test. It is about being able to learn, make mistakes, and apply what you learned from those mistakes.

5. How did you get started in the cybersecurity field, and what advice would you give to a beginner pursuing a career in cybersecurity?

I got my start in cybersecurity when my Dad told me I couldn't have the internet in my room. I went on to Frankenstein a laptop together, found a way to download Whoppix, a now-defunct precursor to Kali, and booted my laptop with it. With money saved from summer jobs, I bought a compatible wireless card that would let me crack the wireless password and get on the internet. I was eventually caught, and my Dad was furious, but from then on a fire had been lit and I needed to know more and more.

As far as pursuing a career in infosec? The best advice I can give is to show up and network. There are plenty of cybersecurity-related conferences, both local and remote, as well as online communities (check out http://infosecjobs.world/!). Get out there and meet the people in the community. By showing up and showing you're interested, you are almost guaranteed to run into someone who will recognize your passion and give you a shot.

6. What is your specialty in cybersecurity? How can others gain expertise in your specialty?

I would have to say my specialty is understanding cyber risk and being able to explain it to the business. There are a lot of news stories about breaches and zero days, and most companies don't know what to do with that information, so they are very reactionary. When some new vulnerability makes the news and the snake oil salesmen come out of the woodwork, I am there to separate the wheat from the chaff. Should you patch the Meltdown and Spectre vulnerabilities? Yes. Are you going to be compromised by an attacker using those? Probably not. Are you going to get compromised by a Flash or Java vulnerability you didn't patch? Absolutely.

As far as how to obtain this triage expertise goes, the best advice I can give is to work in as many different environments as possible. MSPs and MSSPs are great for this. They give you a chance to touch a lot of different environments and lines of business, and to really understand what each needs to function as well as what its risks are.

7. What is your advice for career success when it comes to getting hired, climbing the corporate ladder, or starting a company in cybersecurity?

Be yourself and be honest. There are a lot of people in this industry who feel the need to never be wrong, to shift blame off of themselves, to be a rockstar/ninja/pirate, whatever. These people are eventually found out. If you are yourself, are honest about what you do, and own the mistakes you've made, you will go far, not only in this industry, but in life.

8. What qualities do you believe all highly successful cybersecurity professionals share?

It is cliché, but a passion for what they do. This is not a field where you can get a certificate or degree and never expand beyond it. You need passion to keep learning and to keep up with this industry. The other quality I see is the ability to practice real self-care. This industry can burn you out fast, and if you don't stop and take a step back once in a while, you are guaranteed to burn out.

9. What is the best book or movie that can be used to illustrate cybersecurity challenges?

Star Wars: Rogue One. Stay with me. The Empire, a massive and wealthy organization, lost its most critical asset to a foreign adversary that was familiar with its operating procedures and used the Empire's own tech against it (K-2SO).

10. What is your favorite hacker movie?

Hackers (1995). I can quote it word for word. That, and the Swedish Girl with the Dragon Tattoo series, which does not get nearly enough credit and is closer to the books than any other adaptation.

11. What are your favorite books for motivation, personal development, or enjoyment?

I love Extreme Ownership by Jocko Willink. It really shows how you can lead from anywhere inside an organization. As far as enjoyment is concerned, I love anything by Clive Cussler. The books are formulaic, and there is something comforting about that.

12. What is some practical cybersecurity advice you give to people at home in the age of social media and the Internet of Things?

This is something I am wildly passionate about and can expound on for hours. But if I had to sum it up in one golden rule, it would be: limit your exposure. Is your personal information really worth 50 cents off at the grocery store? The less information you give companies about yourself, the better.

13. What is a life hack that you’d like to share?

A.B.K. Always Be Knolling. Knolling is the process of arranging like objects at 90-degree angles. It's the reason Apple Stores look so nice, and I use it to organize my desk. It has made my desk, and by extension my mind, much less cluttered.

14. What is the biggest mistake you’ve ever made, and how did you recover from it?

I have made a lot of mistakes. No single one stands out as worthy of inclusion. What I will say is that every mistake is an opportunity to learn. Making mistakes is not a bad thing. Repeating them is.

Network IDS/IPS for the Everyday Admin

All too often I hear about the complexities of managing a centrally deployed intrusion detection and prevention system in various types of environments. Some might add that the cost of implementing, maintaining, updating, and training personnel amounts to a nefarious act of constant contact with one's bank account. This doesn't always need to be the case, which is why I like to highlight the options one has when attempting to build up perimeter security at the drop of a hat. The options listed below are not endorsed specifically by Black Bear, nor are they the only options available. Keep in mind that your particular solution needs to be scalable, affordable, manageable, and overall tunable to the requirements of your network.

Let’s start this one off with a BANG, shall we? A topic of conversation to be had over various types of food and drink could be Cloud security implementation. What does one do in order to fully secure someone else’s data center, you ask? Well, in order to implement a solution that allows you to monitor, track, and potentially block unwanted traffic, you’ll need a system that also works with hosted data center operations such as AWS.

Running on EC2 instances, there are several products that you can implement within AWS. Again, there may be more out there, but these are the ones that have been vetted by our teams the most.

  • McAfee Virtual Network Security Platform (vNSP)

  • Trend Micro Deep Security

  • Alert Logic Threat Manager

Feel free to research these products individually or complement them with your own hosted version of the recommendations that follow.

The Bro Network Security Monitor - @Bro_IDS - has always been a personal favorite of mine. Its scripting language is highly adaptable, and as an open-source product it has a well-established development community behind it.

  • Bro is easily downloaded and can be up and running in minutes (see the sketch after this list). Optional dependencies can be pulled down separately, such as a sendmail server to enable Bro/BroControl's alert-to-email feature or LibGeoIP for geolocating IP addresses.

  • The development code base is readily available on git: git clone --recursive git://git.bro.org/bro

  • The Bro Center of Expertise, funded by the NSF, is available to provide extended support to all Bro users at an extremely low cost.

  • Catch one of their Bro conferences, such as BroCon in Arlington, VA (2018) or the Bro Workshop in the EU (2018).
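If you want to see just how quickly Bro can be up and running, here is a minimal sketch of a source build on a Debian/Ubuntu host. The package list and install prefix are assumptions taken from the Bro documentation of this era; check the docs for your platform:

    # build dependencies
    sudo apt install cmake make gcc g++ flex bison libpcap-dev libssl-dev python-dev swig zlib1g-dev
    # fetch and build (installs to /usr/local/bro by default)
    git clone --recursive git://git.bro.org/bro
    cd bro && ./configure && make && sudo make install
    # generate the default configuration and start monitoring
    sudo /usr/local/bro/bin/broctl deploy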

Security Onion - @SecurityOnion - has long been another personal favorite of mine, widely used amongst IT security professionals on the basis of power, efficiency, familiarity, and minimal installation overhead. Although SO is not exclusively an IDS/IPS tool, it utilizes other IDS tools (Bro, as previously mentioned, plus others such as Snort and Suricata, and HIDS-based tools such as OSSEC) to collect all the information you need to make secure decisions about your networks and hosts. Built-in tools such as Squert allow you to query the data easily.

In short, it brings many of the popular tools referenced here together in one suite, and it is extremely powerful in bringing visibility to your network’s traffic.
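For a feel of day-to-day administration, SO ships with a few helper commands that cover the common tasks. These names are from the Ubuntu-based Security Onion builds I have used; they may differ in newer releases:

    sudo sostat        # status summary: services, sensors, disk usage
    sudo so-allow      # open the firewall for an analyst workstation
    sudo rule-update   # pull the latest Snort/Suricata rule sets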

Cisco’s Firepower and FireAMP - @cisco - One of the bigger-box company solutions that has been up-and-coming in recent years is the Firepower line provided by Cisco. The base ideals set forth by the FP systems are built on Snort detection engines for IDS/IPS and an integration with HIDS-based software and IronPort systems through the AMP product. Literally, AMP is everywhere in the Cisco world! This model gives you end-to-end visibility into exactly what your network is doing, how often it is doing it, and where traffic is coming from or moving to. Although the Firepower product itself is not exclusive to IDS/IPS, as it includes modules for host identification, simple URL filtering, and VPN concentration, the overall layered approach to detecting unwanted traffic is still the gleaming purpose of the Cisco Firepower system.

Here are some highlighted items to take into consideration when using Firepower.

  • Policies are exclusive to HTTPS, HTTP, File, and TCP/UDP based traffic.

  • Keep in mind that in order for Firepower to process data about the traffic within the network, it needs to be “implanted” on every endpoint or pass-through point, including the perimeter, the hosts, and cloud-connected services such as IronPort.

  • Source and destination security zones are key in determining the sensors for the traffic. Without clearly defined zones, traffic will not be logged, analyzed, or blocked.

  • Keeping up with the recommendations added to the Snort definitions can be daunting in the interface itself. Take advantage of scheduled updates to the Snort selections by utilizing a timed installation and reviewing the change reports prior to final tuning.

Overall, there are numerous solutions available, both free and at cost, to allow businesses the security and flexibility of modern-day IDS/IPS systems. It’s important to understand the needs of your organization and the transparency you require over the landscape of any network prior to implementing an IDS/IPS system. Modern systems are far less hassle than their predecessors when it comes to maintaining complex rule sets and responding to monitored incidents. That also means that tuning these systems is key in order to provide the most benefit without overloading the network, or your users. As such, here are some bullet points to consider when implementing IDS/IPS.

  • Cost must be efficient in all scenarios. You never want to have to answer for security budget deficiencies by cutting the IDS out of the equation. These systems should be implemented as a necessity given current Internet usage trends, especially with the ever-increasing mobile-device workforce. Make sure to allocate more than enough funds annually and always come in under budget.

  • Tune for performance and efficiency. An IDS can alert on all events and monitor large amounts of traffic; to keep the system useful, only flagged events should generate an alert while the rest can simply be discarded. It does beg the question, however: if there are no alerts, is the system really working as designed? If there are TOO many alerts, the same question applies. Alert thresholds should be evaluated based on their legitimacy, identifying false positives wherever possible. If the IPS is also tuned to drop events related to alerts, you want to ensure that false positives are addressed appropriately; otherwise legitimate traffic may be blocked. (A small tuning sketch follows this list.)

  • Throughout this article I lump IDS and IPS together in one-liners. The truth is, they differ from an operational perspective. An IDS is typically out-of-band: it does not need to be placed inline in all areas of the network, which limits the ill effects of an IDS failure causing overall network communication issues. IPS systems are almost always inline with the existing network traffic and connectivity. They typically have a fail-open or fail-closed requirement; if the IPS is no longer able to block traffic, it should either stop all network traffic or allow all network traffic, depending on your availability model.
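To make the tuning point concrete: Snort-based sensors (including Firepower's underlying engine) support suppression and rate limiting through threshold.conf. A minimal sketch; the SIDs and subnet below are made-up examples, not recommendations:

    # silence a rule that constantly false-positives for one trusted subnet
    suppress gen_id 1, sig_id 2019401, track by_src, ip 192.168.10.0/24

    # keep a chatty rule but cap it at one alert per source per minute
    event_filter gen_id 1, sig_id 384, type limit, track by_src, count 1, seconds 60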

I hope this article was informative and sparks interest in using some of the options listed above. Stay tuned next time for ideas on how to implement proper IDS or IPS strategy within your network, using one or more of the options from this article.

GDPR - 3 Weeks Later


It’s no secret that the General Data Protection Regulation (GDPR) has taken the Internet by storm. In the three weeks prior to the enforcement date of May 25th, 2018 <insert acceptable time zone mapping>, thousands of e-mails and website postings notified their respective users of updated privacy policies, some requiring acceptance of the updated terms via one-click checkboxes. While this last-minute push into GDPR compliance remained consistent over the month or so prior to the deadline, let’s look at the weeks following 05/25/18 and how the response is affecting consumers and businesses alike.

Week 1 – Websites Block Traffic, Complaints Filed, Government Agencies Compromised

The immediate reaction of several large-scale media operations was to simply shut off public access from specific areas of the world (the EU). This sort of reaction may stem from non-preparedness, needing more time to prepare, or otherwise just not knowing what steps need to be completed to remove the risk of non-compliance. Most of the businesses operating out of the U.S. could be taking a reactive approach and waiting, letting other companies be subjected to the fines while learning what they could have done right. A proactive approach is more justified in terms of compliance; however, GDPR leaves a few stones unturned (or at least unclear) in a handful of policy-driven areas.

Privacy activist Max Schrems quickly threw his hat into the ring, along with several others who run in his circles. Organizations involved in the complaints filed so far include La Quadrature du Net and the Center for Digital Democracy, with filings landing before regulators such as the DPA (Belgium) and the DSB (Austria). Some of the complaints were filed against top tech players Facebook, Instagram, and Google.

Immediately following the deadline, the focus on potentially harmful data breaches expanded from the technology companies listed above to a government agency: the European Commission, which reportedly (and inadvertently) leaked the personal information of hundreds of European citizens. The information included names, addresses, and professions, as well as postal information for certain British citizens. As the GDPR is not in effect for the Commission until later this year, the breach did not require notification compliance under the GDPR.


Week 2 – New Guides, Positive Outlooks, More Guides

Immediately following the initial complaints and a somewhat melancholy media response as to exactly how hard the GDPR was going to hit, people are starting to see the forest for the trees. The week of June 4th added several hundred variations of GDPR compliance guides, each with its own roadmaps and summarizations of the GDPR definitions. All (hopefully) accurate, but only time on the Internet will tell.

One notably efficient 241-page guide can be found HERE; it outlines detailed information and a myriad of helpful links surrounding the GDPR compliance paths. June 4th, 2018 brought additional updates to this guide, which has been circulating for some time now. The one line that I get the most questions about concerns the purpose limitation principle, i.e. how data can be transparently collected by a controlling entity of said data. The guide lays it out via Article 5(1)(b):

“1. Personal data shall be: (b) collected for specified, explicit and legitimate purposes and not further processed in a manner that is incompatible with those purposes; further processing for archiving purposes in the public interest, scientific or historical research purposes or statistical purposes shall, in accordance with Article 89(1), not be incompatible with the initial purposes.”


In summary, via the guide (see page 23 of the ICO.ORG.UK Guide to the General Data Protection Regulation (GDPR)):

 "In practice, this means that you must:

be clear from the outset why you are collecting personal data and what you intend to do with it;

comply with your documentation obligations to specify your purposes;

comply with your transparency obligations to inform individuals about your purposes; and

ensure that if you plan to use or disclose personal data for any purpose that is additional to or

different from the originally specified purpose, the new use is fair, lawful and transparent."


Week 3 – Well, that’s this week!


So far, we’ve noticed an uptick in articles outlining the common theme of “companies still aren’t ready.” Only 47 percent of surveyed law firms were found to be “fully prepared” to meet the GDPR requirements. It is true that most companies are still in the process of moving towards full GDPR compliance, mostly due to a lack of talent familiar with the GDPR outline. Few companies prior to 2016 had the resources available to tackle the procedures needed to implement GDPR across the board, and now they’re scrambling to get something in place.


The question is, why are we still here three weeks post-May 25th, 2018 <insert appropriate time zone>? Again, a lack of knowledge on how to navigate the GDPR’s inner workings while selecting appropriate items to act on within the organization. The major fail point in compliance efforts has been understanding how each compliance item relates to the others while maintaining full knowledge of what doesn’t apply and where. A proper GDPR data discovery with associated mapping is key in moving towards GDPR compliance; that way, you’ll know where your data resides and how to secure it as it relates to the regulation. Hopefully, in the coming weeks, we see more compliance-related articles with information on how companies succeeded, rather than more posts about how everyone is still behind, three weeks later.

Run your own Standard Notes Syncing Server

Standard Notes is an open source notes app that offers E2E encryption, syncs notes between devices with clients for almost any platform, and, most importantly, lets you run your own syncing server. Standard Notes' backend server is called Standard File, and Standard Notes makes it super easy to deploy a server in AWS, either manually or through a provided preconfigured image. The documentation provided is great and makes it really simple to deploy a server quickly, but I don't use AWS. So here is a set of instructions for self-hosting the server at another provider (DigitalOcean, Vultr, OVH, etc.) or on your own hardware using Ubuntu 16.04. First things first, you need a server. I am using the cheapest of the cheap available from DigitalOcean. This gives me 1GB of RAM and 25GB of storage for 5 bucks a month.

Some DigitalOcean Specifics:

If you decide to follow these steps to the letter, you will need to create a swap file or Standard File will never start.

Create a swap file

    sudo fallocate -l 1G /mnt/swap
    sudo mkswap /mnt/swap
    sudo chmod 0600 /mnt/swap
    sudo swapon /mnt/swap

Add the following line to the end of your /etc/fstab to make it persistent across reboots.

    /mnt/swap none swap sw 0 0

With the swap situation sorted, we can start on the actual install by getting some dependencies in place. This includes grabbing rvm, which we will use to install the latest version of Ruby.

    sudo apt install mysql-server rubygems libcurl4-openssl-dev gnupg2 git
    gpg2 --recv-keys 409B6B1796C275462A1703113804BB82D39DC0E3
    curl -sSL https://get.rvm.io | bash -s stable

Once everything is installed, we need to add rvm to our path and then install Ruby.

    source /etc/profile.d/rvm.sh
    rvm install ruby
    rvm use ruby

ruby -v should show something similar:

    ruby 2.4.1p111 (2017-03-22 revision 58053) [x86_64-linux]

Standard File requires bundler, so we'll install that now.

    gem install bundler --no-ri --no-rdoc

Next we need to create a database for Standard File to use. First, start the service.

    sudo service mysql start

Then use mysql_secure_installation to configure the service. You can accept the defaults for most of the prompts, with the exception of the three listed below.

    sudo mysql_secure_installation
        remove anonymous users (Y)
        disallow root login remotely (Y)
        remove test database and access to it (Y)

Now log in to mysql and create the standard_file database.

    mysql -u root -p
     > create database standard_file;
     > quit;

Standard File runs on Nginx via Passenger, so we will need to install Passenger first.

    gem install rubygems-update --no-rdoc --no-ri
    update_rubygems
    gem install passenger --no-rdoc --no-ri

Next we'll use Passenger to install Nginx.

First, allow others to execute from your home folder:

    sudo chmod o+x "/home/$user"

Then use Passenger to install Nginx with the following settings:

    rvmsudo passenger-install-nginx-module
        Select Ruby
        Select  1. Yes: download, compile and install Nginx for me. (recommended)
        Install to /opt/nginx

Validate that everything installed correctly

    rvmsudo passenger-config validate-install
        Select Passenger itself

If everything looks good then you should see the following:

    Everything looks good. :-)

Next we'll set up a TLS certificate with Let's Encrypt.
First, get Let's Encrypt and put it in /opt.

    sudo chown $user /opt
    cd /opt
    git clone https://github.com/letsencrypt/letsencrypt
    cd letsencrypt

Then run Let's Encrypt to get a cert but not install it. If you let it install the cert, it will put it in the wrong location.

    ./letsencrypt-auto certonly --standalone --debug

Note the location of the certificates, typically

    /etc/letsencrypt/live/$domain.com/fullchain.pem

Replace $domain with the domain name you used to create the certificate.
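Keep in mind that Let's Encrypt certificates expire after 90 days, so plan on renewing. A quick sketch, with the same path assumptions as above; the standalone challenge needs ports 80/443 free, so stop Nginx around the renewal:

    sudo /opt/nginx/sbin/nginx -s stop
    cd /opt/letsencrypt && ./letsencrypt-auto renew
    sudo /opt/nginx/sbin/nginx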

Nginx now needs to be configured to use the certificate.

    sudo nano /opt/nginx/conf/nginx.conf

Add this to the bottom of the file, inside the last curly brace:

     server {
         listen 443 ssl default_server;
         ssl_certificate /etc/letsencrypt/live/$domain.com/fullchain.pem;
         ssl_certificate_key /etc/letsencrypt/live/$domain.com/privkey.pem;
         server_name domain.com;
         passenger_enabled on;
         passenger_app_env production;
         root /home/$user/ruby-server/public;
       }

Now that we have all the backend work complete, we can actually set up the Standard File server.
Grab the source from GitHub and put it in your home folder.

    cd ~
    git clone https://github.com/standardfile/ruby-server.git

Enter the folder and install the Ruby dependencies:

    cd ruby-server
    bundle install
    bower install
    rails assets:precompile

We need to create a .env file for the Rails app to load that tells it to use the database we set up earlier. (See the note after the file contents for generating the two secret values.)

    RAILS_ENV=production
    SECRET_KEY_BASE=use "bundle exec rake secret"

    DB_HOST=localhost
    DB_PORT=3306
    DB_DATABASE=standard_file
    DB_USERNAME=root
    DB_PASSWORD=

    SALT_PSEUDO_NONCE=use "bundle exec rake secret"
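The two secret values can be generated from inside the app directory; run the command twice and paste one result into each of SECRET_KEY_BASE and SALT_PSEUDO_NONCE:

    cd ~/ruby-server
    bundle exec rake secret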

Initialize the database

    rails db:migrate

And finally start Nginx

    sudo /opt/nginx/sbin/nginx
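Before pointing a client at the server, you can sanity-check that Passenger is serving the Rails app. Replace $domain as before; any HTTP response at all means the stack is up:

    curl -kI https://$domain/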

And that is it! You now have a Standard File server.

I <3 BlackArch

For many, Kali Linux is the go-to distribution for anything security related.  The amount of support it has from both the creators and the community has resulted in a Swiss Army knife for anyone from script kiddies to professional researchers. Kali is designed out of the box to just work, and it does work very well.  However, because it is so specialized, it does not function well as a daily-driver OS.  That specialization means a lot of services and ports are enabled and open, as the tools require them to work properly, that wouldn't normally be enabled.  Also, many of the tools require running as root to function properly. Finally, because things "just work," Kali requires the use of its own repository for software.  Other repos can be added, but at your own peril.

So what if I want my security tools AND an OS that I can use as a daily driver? Enter BlackArch.


While BlackArch can be installed the same way as Kali and is considered a Kali alternative, it can also be installed on top of an already running Arch installation. This is the configuration I run as my daily driver.  BlackArch allows its repositories to be added alongside upstream Arch.  I am then able to get the latest patches as soon as they hit upstream without having to wait for Kali to release their own version of the patch to their repo.

To add the BlackArch repo to your existing install, you'll need to grab strap.sh from BlackArch:

curl -O https://blackarch.org/strap.sh
chmod +x strap.sh
sudo ./strap.sh

As always, review the code to see what it is doing, but essentially all the script does is add the BlackArch repo mirror and keyring to your pacman config.  Once it is added, you can install individual packages OR categories of packages.  To check out the categories, run:

sudo pacman -Sg | grep blackarch
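From there, pulling in a whole category is a one-liner. blackarch-scanner below is just one example category name:

pacman -Sg blackarch-scanner      # list the packages inside the category
sudo pacman -S blackarch-scanner  # install everything in it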

Now, I could use VMs, dual boot, or a USB stick, and while I do use a disposable Kali VM for engagements, I don't want to have to boot up a VM or reboot just to mess around with security tools or do testing (read: lazy).  I have found that this setup works really well for my use case.

And that's it!  Hope this was helpful.  I would love to hear your thoughts.  Let us know what you think by hitting us up on twitter @BlkBearIS

5 Tips for End User Awareness

Perhaps the most overlooked risk mitigation technique is strength in numbers. Those numbers are the people who may have the most impact on your network's overall security: the users. The common misconception amongst all sorts of organizations is that users are high-risk, low-impact when it comes to effectively increasing overall security in an organization's environment. As a result, network users own less accountability in their responsibility to protect company assets. This creates an environment ready to be compromised by any willing threat actor.

The users are the be-all, see-all. They're responsible for creating shadow IT environments. They're the ones ultimately clicking on the risky links. They're always involved in the systems that may present vulnerabilities at every level of operation. Users are truly the ones that need education on properly handling incidents and on risk avoidance techniques. Let's not confuse the term user with strictly end users; system administrators, IT managers, software developers, and other IT professionals are also at fault. Despite industry push-back that end user awareness training (EUAT) is less than relevant, the importance of layered security begins with the users of the systems and data you're attempting to secure.

Here are five primary items that will assist you in creating your own culture of awareness regardless of the technological support.

1. Schedule of Events

No matter what program is implemented, a standard schedule is important in creating an environment for users' education to thrive. This method provides a structure of expectations each month, quarter, week, or day; whatever communications frequency is appropriate for your teams. The schedule works well if you're able to tie it to a fiscal year via a monthly newsletter. Predefine the delivery medium and stick to it. Users will respond more willingly if you are able to tie an incentive to the scheduled events, such as an additional vacation day or security-branded swag.

2. Methods of Delivery

In speaking about the methods of delivering your message, consider the physical locations of your users, their core working hours, and their responsiveness to the communication methods available. For example: if your marketing department typically distributes a monthly newsletter via magazine or paper, attempt to piggyback on a similar communication. Users who respond to handouts before e-mails will typically spend more time, on average, reading a paper-based pamphlet. EUAT programs can be tailored to fit this type of medium easily, and the expense is worth the impact.

This is not to say that e-mail communications should be disregarded entirely. Supplementing with e-mail when a paper brochure is used provides a searchable, indexed reference that users can recall easily. Regardless of your primary medium, adding an e-mail communication with an attached awareness topic is always warranted.

It's important to note that a thorough end-user awareness program should be a joint effort between InfoSec, IT, HR, and corporate training. In the event that not all departments are available to create and deploy such an EUAT program, there are many 3rd-party resources that can help outside of your respective org.

3. Measuring Your Success

EUAT programs can be measured based on failure rates. Using this negative-impact way of thinking, testing the Information Security awareness of a person or group focuses primarily on the response from that particular group. Prior to expecting an appropriate response, it's usually a good idea to train the users in advance; the timing of the training versus the testing is up to you. Once the testing is complete, you should be able to compile a list of failures and successes. Failures will point to those who clicked on a malicious link that had all the key indicators covered in the training. A failure can result in re-training, or re-communicating the initiative to either the single failing user or the entire group of previously trained individuals. This gives a complete picture of where the training failed and where users require additional help in identifying security compromises.

4. Marketing Meets InfoSec

A brief tie-in to the previously mentioned measurements of success: IT in general can benefit from a marketing mentality towards end user awareness training. EUAT is most effective when its focus relies heavily on the tendencies of human nature. The truth is, we have all clicked on a bad link at some point in our lives, but the key to awareness is the attempt to change that mentality. When an EUAT program ties into a marketing and communication structure, all users (including IT) will respond more thoroughly given a more complete understanding of what they need to learn. Marketing techniques let the InfoSec product reach all types of mindsets and can easily be integrated into your EUAT program.

Shameless plug: you are reading our company blog, aren't you? :)

5. When to Call in the Big Guns

By now you're probably wondering, "Where would I find the time to put all of these items together and redo them every few months?" Here's where an outside consultant can provide value to your program. Companies like Black Bear Information Security, LLC will talk with your security teams, IT managers, directors, and C-level personnel to collectively assist in building an ad-hoc End User Awareness Training program for your organization. Not only will this service provide the means and methods to ensure your greatest asset (your people) is more informed, it will also improve the company's posture regarding overall safety, security, and well-being.

Troubleshooting the Wifi Pineapple

I've discovered a problem that occurs when setting up my Wifi Pineapple using the wp6.sh script. For whatever reason, the commands do not run properly and no settings are changed. In lieu of the wp6.sh script, you can do the following to get your Wifi Pineapple working properly every time.

Note: 
In the commands below, $pineapple_if is the network interface connected to the Wifi Pineapple (e.g. eth1), $inet_if is the interface with your Internet connection, and $host_ip is the IP address your computer has on the Pineapple-facing interface. The Wifi Pineapple runs a DHCP server but assumes that your computer will get an IP address of 172.16.42.42. This is not always the case, so double check.

1) Make sure that forwarding is enabled.

echo '1' > /proc/sys/net/ipv4/ip_forward  

2) Create the iptables rules to share your network connection with the Wifi Pineapple. Note that iptables' -i and -o flags take interface names, not IP addresses.

iptables -F  
iptables -A FORWARD -i $pineapple_if -o $inet_if -s 172.16.42.0/24 -m state --state NEW -j ACCEPT  
iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT  
iptables -t nat -A POSTROUTING -o $inet_if -j MASQUERADE  

3) Finally, SSH into the Wifi Pineapple and set its default route to point back at your computer.

route del default  
route add default gw $host_ip  
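While still on the Pineapple, a quick check confirms traffic is actually flowing through your machine:

ping -c 3 8.8.8.8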



SECURE IT - Amazon Echo/Alexa

As part of our InfoSec industry duties in maintaining a positive image to society (I'm not really sure that's a thing, but it sounded nice), I'll attempt to bring you a regularly scheduled "SECURE IT" blog entry. In hopes that I have more than two readers, I will also accept requests for the following week's "SECURE IT" blog. So far, my cat asks specifically for an entry on "more treats." This has nothing to do with InfoSec. I'll place that request on the back burner for now.

If you don't know at least one person who has an Amazon Echo or Alexa device, version 1 or 2, then you likely haven't left the house recently. The device line has become the basis for voice recognition appliances in a majority of homes, at roughly 69% market share (versus Google Home at 25% and other devices at 6%, as of Dec 2017). The Echo, Echo Dot, Echo Show, etc. have become a staple for game nights, trivia, music streaming, and even online ordering directly through your connected Amazon.com account.

It's no wonder that the Amazon Echo has InfoSec professionals and other like-minded paranoids rather uneasy about the whole thing. As such, there are a handful of methods to put worrying minds at ease, and several others to reverse said ease rather quickly.

SECURE IT NOW

1. Disable the built-in microphone

So you're probably asking yourself as you read this, "But... why?!" Well, although the main function of the Amazon Echo and related products is to provide voice-assisted tasks and integrate those voice commands with other network-connected devices, it is possible to disable that function over fear of eavesdropping on everyday conversations. To that end, the Echo is equipped with a microphone on/off switch.

Next time you decide to verbally communicate your passwords or social security numbers to someone inside your home, tap the mic off.

2. Isolate!

No, I'm not talking about what you practice during a long Western PA winter that is seemingly endless (even after a weekend of sun and 80-degree weather, it returns with snow and cold). Isolation here refers to a network segmentation technique, typically applied over wireless. I would recommend combining isolation and true segmentation if your Amazon Echo does not need to communicate with any other device on your network, only with the Internet for queries such as "WHEN WILL THIS WINTER END?".

This can be accomplished via settings in your router, or a combination of router and managed network switch; a rough sketch of the idea follows.
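On a Linux-based router, the segmentation boils down to a few forwarding rules. The interface names br-iot, br-lan, and eth0 are placeholders (IoT network, trusted LAN, and WAN uplink respectively):

    # allow return traffic for existing connections first
    iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT
    # IoT devices may reach the Internet...
    iptables -A FORWARD -i br-iot -o eth0 -j ACCEPT
    # ...but never the trusted LAN
    iptables -A FORWARD -i br-iot -o br-lan -j DROP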

3. Disable Voice Purchasing

For all the parents out there, this is a must-do as soon as the Amazon Echo is connected to the Internet. Otherwise, you may find several hundred orders of toys, juice boxes, and other kid-friendly paraphernalia randomly showing up at your door. In one case, a threat actor was able to voice-command purchase items through an open window and return to pick up their prize when the homeowners were away on vacation.

You may also choose to keep voice purchasing but require a PIN during the process. Rotate the PIN if you don't want passersby eavesdropping on the PIN for your eavesdropping device. And don't use the same PIN as your bank account.

4. Patch Management

Just like any other computing device, the Amazon Echo has its flaws, updates, fixes, and new releases. As such, the Echo is vulnerable to many of the same issues as a mobile device or laptop computer: things like Bluetooth stack vulnerabilities, Linux kernel remote code execution, information disclosure in the SDP server, and so on and so forth. It's important to ensure that your Amazon Echo is running the latest patched code to mitigate risks related to common system security issues. It is a computer, by the way!

The majority of the Amazon Echo's security vulnerabilities are exploitable only when the threat actor or tester has physical access. With that being said, more and more businesses are utilizing the Echo, for example as an option in a hotel room. There's no telling how many malicious actors have compromised Echos in public locations especially. Once the device is compromised, it may be used to traverse other areas of the network or eavesdrop on your private conversations.

I hope this was at least somewhat helpful in determining how to SECURE IT - Amazon Echo/Alexa. Take care.

DISCLAIMER: The preceding article lists information as provided by an unbiased 3rd-party consulting company and is in no way, shape, or form sanctioned by Amazon, Inc.


Coreboot on the ThinkPad X220



Coreboot is an open source project to replace proprietary BIOS/UEFI with lightweight and secure boot firmware. It also allows you to remove most of Intel's Management Engine without any ill effects. I was in the market for a new laptop at the time; my daily driver had been a MacBook Pro (Early 2015) and it was getting long in the tooth. I decided to add Coreboot compatibility to my list of requirements (in addition to upgradable components and an external battery).

I found my perfect fit in a Lenovo ThinkPad X220. It was older, but for under $400 I was able to get an i7 processor with 16GB of RAM and a 256GB SSD. You can take it apart with a Phillips head, pretty much all the components are upgradeable, there is a surplus of spare parts available, and it is Coreboot compatible. Best $400 I ever spent. Once it arrived, I blocked off a weeknight and got to work.

There are two great guides on flashing the X220, by Karl Cordes and Tyler Cipriani; I will link them below, as they served as my guides in this process. That being said, they are a bit confusing at points, and there are new issues that aren't addressed in them because, as of this writing, they are about a year and a half old. So this will serve as my updated guide to flashing Coreboot on a ThinkPad X220.

Required Hardware/Software:

  • Lenovo ThinkPad X220 - duh
  • Raspberry Pi running Raspbian - Model 2 or 3
  • Pomona SOIC8 5250 chip clip
  • 8x Female Jumper Leads
  • Another Computer to SSH into the Raspberry Pi from
  • A Debian Server - Used for building Coreboot (I spun one up in DigitalOcean)

The Process

1) Configure an SD card with the Raspbian Lite image

2) Insert the SD card and connect the Pi to the network. I used an ethernet connection, but you can use PiBakery to autoconnect to your wifi if you have a Raspberry Pi 3

3) Find the Raspberry Pi on your network and SSH into it. Check the DHCP leases on your router to find its address.

ssh pi@$piipaddress  

4) Flashrom is a tool used to read the data from the BIOS chip. We will need the latest version, but first we need to install some prerequisites.

sudo apt update && sudo apt upgrade  
sudo apt install build-essential pciutils usbutils libpci-dev libusb-dev libftdi1 libftdi-dev zlib1g-dev subversion git  

5) Once everything is up to date and installed, download and compile Flashrom

git clone https://review.coreboot.org/flashrom.git  
cd flashrom  
make  
sudo make install  

6) Now we need to add the SPI kernel modules to /etc/modules so they load on every boot. Once this is done, shut down the Pi and remove the power cable.

  sudo nano /etc/modules
    spi_bcm2835
    spidev

Alternatively, you can load them manually each boot:

sudo modprobe spi_bcm2835  
sudo modprobe spidev  

7) We need to connect the Pi to the BIOS chip on the X220, which is located underneath the palmrest. Follow this Lenovo support guide on removing the keyboard and palmrest.

8) The BIOS chip is located underneath the black shielding by the PCMCIA slot. Attach the chip clip.

9) Connect the chip clip to the Raspberry Pi using the following diagram. (Courtesy of Karl Cordes)

Pomona 5250

Screen (furthest from you)  
         __
MOSI  5 --|  |-- 4  GND  
CLK   6 --|  |-- 3  N/C  
N/C   7 --|  |-- 2  MISO  
VCC   8 --|  |-- 1  CS

Edge (closest to you)  

Raspberry Pi

Edge of pi (furthest from you)  
L                                                             CS  
E                                                             |  
F +--------------------------------------------------------------------------------------------------------+  
T |    x    x    x    x    x    x    x    x    x    x    x    x    x    x    x    x    x    x    x    x    |  
  |    x    x    x    x    x    x    x    x    x    x    x    x    x    x    x    x    x    x    x    x    |
E +--------------------------------------------^----^----^----^---------------------------------------^----+  
D                                              |    |    |    |                                       |  
G                                             3.3V  MOSI MISO |                                      GND  
E                                           (VCC)            CLK  
  Body of Pi (closest to you)

10) With the clip connected, power on the Pi and SSH back into it. We should now be able to read the contents of the ROM with Flashrom. If Flashrom is not able to read the ROM, check your connections and try reseating the chip clip; those are the two most common problems.

sudo flashrom -p linux_spi:/dev/spidev0.0,spispeed=512 -r oem01.bin  

11) If the first read was successful, read the ROM again and compare the two reads to ensure everything is working correctly.

sudo flashrom -p linux_spi:/dev/spidev0.0,spispeed=512 -r oem02.bin  
md5sum oem01.bin oem02.bin  

12) Now that we have a known good copy of the ROM, we need to download and compile Coreboot. You can try to compile Coreboot on the Pi, but in my experience that either takes too long OR fails entirely. This is where that Debian server comes into play. Transfer oem01.bin to the Debian server; we will need it for the Coreboot compilation.

Note: There is also a docker image available for building coreboot. I have not tried it yet though.  

13) On the Debian server, install the prerequisites for building Coreboot and me_cleaner (more on that later)

sudo apt install git build-essential gnat flex bison libncurses5-dev wget zlib1g-dev python3  

14) Clone the Coreboot source

git clone http://review.coreboot.org/coreboot.git ~/coreboot  
cd ~/coreboot  
git submodule update --init --recursive  
cd ~/coreboot/3rdparty  
git clone http://review.coreboot.org/p/blobs.git  

15) Build ifdtool. This is used to dump the Intel firmware from the ROM.

cd ~/coreboot/util/ifdtool  
make  
sudo make install  

16) Extract the individual components of the ROM for Coreboot to reference

cd ~  
ifdtool -x ~/oem01.bin  
mkdir -p ~/coreboot/3rdparty/blobs/mainboard/lenovo/x220  
cd ~/coreboot/3rdparty/blobs/mainboard/lenovo/x220  
cp ~/flashregion_0_flashdescriptor.bin descriptor.bin  
cp ~/flashregion_2_intel_me.bin me.bin  
cp ~/flashregion_3_gbe.bin gbe.bin  

17) With all the pieces in place, now we need to modify the Intel Management Engine to only run during the boot operation. This is done with me_cleaner.

cd ~  
git clone https://github.com/corna/me_cleaner ~/me_cleaner  
cd ~/me_cleaner  
./me_cleaner.py ~/coreboot/3rdparty/blobs/mainboard/lenovo/x220/me.bin

18) Now we can configure Coreboot with all the settings we want before compilation.

cd ~/coreboot  
make nconfig  

19) Use the following settings for Coreboot. (Courtesy of Tyler Cipriani)

general  
  - [*] Compress ramstage with LZMA
  - [*] Include coreboot .config file into the ROM image
  - [*] Allow use of binary-only repository
mainboard  
  -  Mainboard vendor (Lenovo)
  -  Mainboard model (ThinkPad X220)
  -  ROM chip size (8192 KB (8 MB))
  -  (0x100000) Size of CBFS filesystem in ROM
chipset  
  - [*] Enable VMX for virtualization
  -  Include CPU microcode in CBFS (Generate from tree)
  -  Flash ROM locking on S3 resume (Don't lock ROM sections on S3 resume)
  - [*] Add Intel descriptor.bin file
    (3rdparty/blobs/mainboard/$(MAINBOARDDIR)/descriptor.bin) Path and filename of the descriptor.bin file
  - [*] Add Intel ME/TXE firmware
    (3rdparty/blobs/mainboard/$(MAINBOARDDIR)/me.bin) Path to management engine firmware
  - [*] Add gigabit ethernet firmware
    (3rdparty/blobs/mainboard/$(MAINBOARDDIR)/gbe.bin) Path to gigabit ethernet firmware
devices  
  - [*] Use native graphics initialization
display  
  - (nothing checked)
generic drivers  
  - [*] Support Intel PCI-e WiFi adapters
  - [*] PS/2 keyboard init
console  
  - [*] Squelch AP CPUs from early console.
    [*] Show POST codes on the debug console
system tables  
  - [*] Generate SMBIOS tables
payload  
  - Add a payload (SeaBIOS)
  - SeaBIOS version (master)
  - (3000) PS/2 keyboard controller initialization timeout (milliseconds)
  - [*] Hardware init during option ROM execution
  - [*] Include generated option rom that implements legacy VGA BIOS compatibility
  - [*] Use LZMA compression for payloads
debugging  
  - (nothing checked)

20) Finally, we can build Coreboot. The build will put the ROM at ~/coreboot/build/coreboot.rom

cd ~/coreboot  
make crossgcc-i386 CPUS=4  
make iasl  
make  

21) Transfer coreboot.rom to your Raspberry Pi. Repeat steps 10 and 11 to confirm that the chip clip is connected properly, then go ahead and flash the ROM.

sudo flashrom -p linux_spi:/dev/spidev0.0,spispeed=512 -w ~/coreboot.rom  

22) Test to make sure that the ROM was flashed correctly.

sudo flashrom -p linux_spi:/dev/spidev0.0,spispeed=512 -r core01.bin  
sudo flashrom -p linux_spi:/dev/spidev0.0,spispeed=512 -r core02.bin  
md5sum core01.bin core02.bin  
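Flashrom can also verify the chip contents directly against the image you just wrote, which rolls the same check into one step:

sudo flashrom -p linux_spi:/dev/spidev0.0,spispeed=512 -v ~/coreboot.rom  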

23) Reassemble your X220 and boot. If you were successful, you should be greeted by SeaBIOS. If not, try again. If all else fails, revert to your old BIOS by flashing oem01.bin. Hang on to oem01.bin even if you did flash successfully; it will be helpful if you ever want to revert to your old BIOS.

References
Tyler Cipriani's Write Up
Karl Cordes' Blog