PCI: Listed vs. Non-Listed P2PE Solutions


With credit card processing, it is important to understand the different parts and pieces that make up the required security components. The PCI Security Standards Council (PCI-SSC) has released an assessment methodology for merchants using Point-to-Point Encryption (P2PE) solutions. The addition of the Non-Listed Encryption Solution Assessment (NESA) and the accompanying audit process provides merchants an expanded pool of encryption solutions beyond the current list of validated providers, allowing for a wider range of security offerings if the merchant is willing to accept the added liability and cost of a self-certified solution. It is very important to understand the assessment requirements of each before deciding between a listed and a non-listed solution.

The process for becoming a listed solution with the PCI-SSC begins with an audit performed by an independent third-party Qualified Security Assessor (QSA) who has been certified specifically for P2PE assessments. During this technical assessment, the P2PE QSA will evaluate the solution against the relevant controls outlined in the following six P2PE Domains from the PCI-SSC:

  • Domain 1: Encryption Device and Application Management
  • Domain 2: Application Security
  • Domain 3: P2PE Solution Management
  • Domain 4: Merchant Managed Solutions (not applicable to third-party solution providers)
  • Domain 5: Decryption Environment
  • Domain 6: P2PE Cryptographic Key Operations and Device Management

For each applicable control, the P2PE QSA will collect evidence from the solution and observe all required procedures to ensure compliance with the standard. The results of the assessment are then documented using the P2PE Report on Validation (P-ROV) template, which the QSA submits directly to the PCI-SSC for final review. Once a representative of the PCI-SSC has reviewed, approved, and signed the submitted P-ROV, the solution will receive an official listing on the PCI website.


The process of implementing and validating a new or existing solution can be quite lengthy, and the NESA process gives solution providers the ability to offer customers a degree of security assurance, along with scope reduction, while they work toward a validated listing. Much like the process for becoming a listed solution, non-listed solution providers need to engage their own P2PE QSA to perform an assessment of their solution. The requirements for this type of assessment, however, have been relaxed: a non-listed solution assessment can be completed without meeting the requirements of P2PE Domains 1, 2, or 3, but must meet all applicable requirements of Domains 5 and 6. Though the QSA will still complete a P-ROV for informational purposes, the end result of this assessment is a set of documents (referred to as the NESA documentation) which includes:

  • A description of the solution
  • A summary of the application’s full compliance, partial compliance, or non-compliance with Domains 1, 2, and 3
  • A statement of compliance confirming the applicable requirements of Domains 5 and 6 are met
  • The assessing P2PE QSA’s recommendation as to how the solution impacts the merchant’s PCI scope

This set of documents serves the same purpose as a listed solution’s P-ROV or Attestation of Validation (AOV), without being submitted to the PCI Council or the payment brands, and will be used by PCI QSAs when assessing the PCI compliance of a merchant utilizing the non-listed solution. As with standard PCI certification documentation, the NESA documentation should be distributed to clients on an annual basis, and whenever there are significant changes to the system.

At the merchant level, the difference between implementing a listed versus a non-listed solution becomes apparent during the annual PCI-DSS re-certification. A merchant using a listed solution in accordance with the solution provider’s P2PE Instruction Manual (PIM) and the pre-requisites of the SAQ P2PE automatically qualifies for a drastic reduction in PCI scope when assessing their environment. This is because the security and isolation of credit card data has been verified by a representative of the PCI-SSC. This same level of scope reduction is not guaranteed with a non-listed solution and depends largely on what is permitted by the merchant’s acquirer as well as the payment brands. In some cases, the acquirer or payment brands may require the aid of a PCI QSA to review the solution provider’s NESA documentation and the merchant’s implementation of the solution to determine what PCI-DSS requirements are covered, and to what degree. The results of this secondary solution assessment will determine which areas of the merchant environment are in scope of PCI, but will not qualify the merchant to utilize the SAQ P2PE.

You can get more information from the official PCI Security Standards Council website or your QSA.

Selecting a PCI QSA Vendor


If you are a merchant that accepts credit cards, you are required to comply with the requirements of the Payment Card Industry (PCI) Data Security Standards (DSS), and you must demonstrate that compliance each calendar year to your bank.

You can find more information about what that means to your business here. Once you are ready to start your compliance effort, you will need to engage a third-party team to help make sure you are making the correct decisions about demonstrating compliance, and to review changes as they occur so you aren’t making poor security decisions. They will also certify your compliance with a standard-format report, called a Report on Compliance (ROC), that explains why they believe your environment is secure enough for customer transactions.


It can be difficult to find the correct partner that can help guide you through this difficult and expensive process, but a little work in the beginning can save you headaches and expenses later in the process.

The first thing to remember is that you need to run your business, and that already means having a secure information technology (IT) environment and worrying about adequate security. Being PCI compliant and having a secure network are not the same thing. The PCI DSS provides a standard framework that lists the minimum requirements for implementing a secure IT environment. You have to know your specific requirements and what makes your environment different from the “standard” IT environment. Your Qualified Security Assessor (QSA) needs the skills to understand technology in general, and your specific environment in particular, before they can perform their job well.


Here are three tips to help you choose the right QSA for your PCI Compliance audit:

  • How experienced is the QSA? You don’t want to pay to train your QSA on how to perform their job. You want someone who already has a few years of experience.
  • What is the approach to the audit used by the QSA? You have to understand how the QSA will interact with your team and how well their approach will work with your culture and corporate environment. The best-case scenario is they fit seamlessly with your employees and have the professionalism and communication skills to work well with the entire team.
  • Will the QSA stand behind their work? If things go wrong and there are questions about your security measures, or even a breach of customer data, you want someone who is confident in their assessment and will stand behind your security decisions.

Pricing is another area that can be hard to compare from company to company. I looked at 5 different companies and they gave me 5 very different pricing proposals. The cheapest was almost half the cost of the most expensive. There are several factors that play into pricing, and those factors will vary for each company, but your overall network complexity, number of credit card transactions, and requested services can drive huge swings in pricing. Don’t be afraid to do some comparison shopping to find a vendor with pricing that meets your budget, but the cheapest vendor isn’t always the one you should select.

It can be worth paying slightly more if you are going to get much better service and support.


Cover Your Laptop’s Webcam


You may have seen several people covering their laptop webcams, including government officials and a high-profile CEO or two. This may have you asking why they would choose to cover their webcams, and whether you should be doing the same thing.


Hackers want access to any high-profile system, and video taken from a webcam can easily be used for blackmail. Imagine the type of material you might capture from a high-profile CEO: footage of him or her working, or conversations recorded without their knowledge. Hackers can generate the most profit when they capture video or audio they can use as blackmail.

While it is unlikely that they would attack your laptop, you could still be a target if you have access to sensitive data or if your recorded activity can be used to gain access to other systems or devices.

Currently, the only way for a hacker to access your webcam is to gain access to your computer, which makes this attack similar to any other type of remote attack. You might receive an email with an attachment that secretly installs a Remote Administration Tool, or you might respond to a social engineering attack that convinces you to surrender control via a fake IT support call. Your laptop could be compromised and you wouldn’t even know they had taken control of your webcam, because they can disable the webcam activity LED.

Best Practice Recommendations

  • Keep the webcam lens (usually located at the top center of the laptop screen) covered with a piece of opaque sticky tape except when it is actively being used.
  • Keep your laptop closed when it isn’t actively being used.
  • Always keep your software up to date, especially your web browsers and all associated plug-ins.
  • Enable your firewall at all times.
  • Always run anti-virus and routinely check for malware.
  • Avoid clicking links in emails, even when you know the sender.
  • If you get an email telling you your email account has been compromised or that someone needs to verify your security settings, don’t click the link in the email. Contact the site directly.
  • If you get a call from IT asking for access to your computer, refuse access and call your internal help desk directly. Ask questions and verify the caller’s identity before you allow any remote access.

Making a Kali Bootable USB Drive on your Mac


Many people want to run a new version of Linux without the need for a new computer. The easiest, and probably fastest, way to run Kali Linux (this actually works the same way with most distributions) is to run it from a USB drive without installing it to your internal hard drive.

This simple method has several advantages:

  • It’s fast – Once you have the distribution installed on a bootable USB drive, you can boot to the login screen in just a few seconds, vs. installing and configuring the files on your internal hard drive.
  • It’s reversible — since this method doesn’t change any of your files on your internal drive or installed OS, you simply remove the Kali USB drive and reboot the system to get back to your original OS.
  • It’s portable — you can carry the Linux USB with you at all times so you can use it on most systems in just a few seconds.
  • It’s optionally persistent — you can decide to configure your Kali Linux USB drive to have persistent storage, so your data and configuration changes are saved across reboots.

In order to do this, we first need to create a bootable USB drive which has been set up from an ISO image of Kali Linux.

What You’ll Need

  1. A verified copy of the appropriate ISO image of the latest Kali build image for the target system. You’ll probably select the 64-bit version in most cases.
  2. In OS X, you will use the dd command, which comes pre-installed on your Mac.
  3. A 4GB or larger USB thumb drive.

Creating a Bootable Kali USB Drive on OS X

OS X is based on UNIX, so creating a bootable Kali Linux USB drive in an OS X environment is similar to doing it on Linux. Once you’ve downloaded and verified your chosen Kali ISO file, you use dd to copy it over to your USB stick.

WARNING: You can overwrite your internal hard drive if you do this wrong. Although this process is very easy, you should be very careful to follow the instructions.
  1. Without the target USB drive plugged into your system, open a Terminal window, and type the command diskutil list at the command prompt.
  2. This will display the device paths (look for the part that reads /dev/disk0, /dev/disk1, etc.) of the disks mounted on your system, along with information on the partitions on each of the disks.
  3. Plug in your USB device to your Mac in any open USB port, wait a few seconds, and run the command diskutil list a second time. Your USB drive will now appear in the listing and the path will most likely be the last one shown. In any case, it will be one which wasn’t present before. In this example, you can see that there is now a /dev/disk6 which wasn’t previously present.
  4. Unmount the drive (assuming, for this example, the USB stick is /dev/disk6 — do not simply copy this, verify the correct path on your own system!):
diskutil unmount /dev/disk6
  5. Proceed to (carefully!) image the Kali ISO file on the USB device. The following command assumes that your USB drive is on the path /dev/disk6, and you’re in the same directory with your Kali Linux ISO, which is named “kali-linux-2016.2-amd64.iso”:
sudo dd if=kali-linux-2016.2-amd64.iso of=/dev/disk6 bs=1m

Note: Increasing the blocksize (bs) will speed up the write progress, but will also increase the chances of creating a bad USB stick. 

Imaging the USB drive can take a good amount of time (over half an hour is not unusual), as the sample output below shows. Be patient and wait for the command to finish.

The dd command provides no feedback until it has completed, but if your drive has an access indicator, you’ll probably see it flickering from time to time. The time to dd the image across depends on the speed of the system, the USB drive itself, and the USB port it’s inserted into. Once dd has finished imaging the drive, it will output something that looks like this:

2911+1 records in
2911+1 records out
3053371392 bytes transferred in 2151.132182 secs (1419425 bytes/sec)

And that’s it! You can now boot into a Kali Live environment from the USB device.

To boot from an alternate drive on an OS X system, bring up the boot menu by pressing the Option key immediately after powering on the device and select the drive you want to use.


Good Luck!

netdata: Linux Real Time Performance Monitoring


netdata should be installed on each of your Linux servers. It is the equivalent of the monitoring agent provided by other monitoring solutions. netdata is a highly optimized Linux daemon providing real-time performance monitoring for Linux systems, applications, and SNMP devices, all presented over the web. It is useful for visualizing what is happening right now on your systems and applications.


This is what you get:

  • Stunning interactive bootstrap dashboards
    mouse and touch friendly, in 2 themes: dark, light
  • Amazingly fast
    responds to all queries in less than 0.5 ms per metric, even on low-end hardware
  • Highly efficient
    collects thousands of metrics per server per second, with just 1% CPU utilization of a single core, a few MB of RAM and no disk I/O at all
  • Sophisticated alarming
    supports dynamic thresholds, hysteresis, alarm templates, multiple role-based notification methods (such as email, slack.com, pushover.net, pushbullet.com, telegram.org, twilio.com)
  • Extensible
    you can monitor anything you can get a metric for, using its Plugin API (anything can be a netdata plugin, BASH, python, perl, node.js, java, Go, ruby, etc)
  • Embeddable
    it can run anywhere a Linux kernel runs (even IoT) and its charts can be embedded on your web pages too
  • Customizable
    custom dashboards can be built using simple HTML (no javascript necessary)
  • Zero configuration
    auto-detects everything, it can collect up to 5000 metrics per server out of the box
  • Zero dependencies
    it is even its own web server, for its static web files and its web API
  • Zero maintenance
    you just run it, it does the rest
  • scales to infinity
    requiring minimal central resources
  • back-ends supported
    can archive its metrics on graphite or opentsdb, in the same or lower detail (lower: to prevent it from congesting these servers due to the amount of data collected)
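
As a sketch of the Plugin API mentioned above: an external netdata plugin is just a long-running program that prints metrics to standard output using netdata’s CHART / DIMENSION / BEGIN / SET / END text protocol. The chart and dimension names below are hypothetical, and the exact protocol fields should be confirmed against the netdata plugin documentation:

```shell
#!/usr/bin/env bash
# Hypothetical external netdata plugin: declares one chart with one
# dimension, then emits a value for it on each iteration.

# Declare the chart and its dimension once, at startup.
echo "CHART example.random '' 'Random Number Demo' 'value' random example.random line"
echo "DIMENSION random '' absolute 1 1"

emit_sample() {
  echo "BEGIN example.random"
  echo "SET random = $RANDOM"
  echo "END"
}

# A real plugin loops forever, sleeping for its update interval;
# three iterations are enough to show the shape of the output.
for _ in 1 2 3; do
  emit_sample
  sleep 1
done
```

Dropped into netdata’s plugins directory and made executable, a script like this is all it takes for a new chart to appear on the dashboard, which is why the feature list above says anything that can produce a metric can be a plugin.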

This is what it currently monitors (most with zero configuration):

  • CPU
    usage, interrupts, softirqs, frequency, total and per core
  • Memory
    RAM, swap and kernel memory usage, including KSM the kernel memory deduper
  • Disks
    per disk: I/O, operations, backlog, utilization, space
  • Network interfaces
    per interface: bandwidth, packets, errors, drops
  • IPv4 networking
    bandwidth, packets, errors, fragments, tcp: connections, packets, errors, handshake, udp: packets, errors, broadcast: bandwidth, packets, multicast: bandwidth, packets
  • IPv6 networking
    bandwidth, packets, errors, fragments, ECT, udp: packets, errors, udplite: packets, errors, broadcast: bandwidth, multicast: bandwidth, packets, icmp: messages, errors, echos, router, neighbor, MLDv2, group membership, break down by type
  • Interprocess Communication – IPC
    such as semaphores and semaphore arrays
  • netfilter / iptables Linux firewall
    connections, connection tracker events, errors
  • Linux DDoS protection
    SYNPROXY metrics
  • fping latencies
    for any number of hosts, showing latency, packets and packet loss
  • Processes
    running, blocked, forks, active
  • Entropy
    random numbers pool, used in cryptography
  • NFS file servers and clients
    NFS v2, v3, v4: I/O, cache, read ahead, RPC calls
  • Network QoS
    the only tool that visualizes network tc classes in realtime
  • Linux Control Groups
    containers: systemd, lxc, docker
  • Applications
    by grouping the process tree and reporting CPU, memory, disk reads, disk writes, swap, threads, pipes, sockets – per group
  • Users and User Groups resource usage
    by summarizing the process tree per user and group, reporting: CPU, memory, disk reads, disk writes, swap, threads, pipes, sockets
  • Apache and lighttpd web servers
    mod-status (v2.2, v2.4) and cache log statistics, for multiple servers
  • Nginx web servers
    stub-status, for multiple servers
  • Tomcat
    accesses, threads, free memory, volume
  • mySQL databases
    multiple servers, each showing: bandwidth, queries/s, handlers, locks, issues, tmp operations, connections, binlog metrics, threads, innodb metrics, and more
  • Postgres databases
    multiple servers, each showing: per database statistics (connections, tuples read – written – returned, transactions, locks), backend processes, indexes, tables, write ahead, background writer and more
  • Redis databases
    multiple servers, each showing: operations, hit rate, memory, keys, clients, slaves
  • memcached databases
    multiple servers, each showing: bandwidth, connections, items
  • ISC Bind name servers
    multiple servers, each showing: clients, requests, queries, updates, failures and several per view metrics
  • Postfix email servers
    message queue (entries, size)
  • exim email servers
    message queue (emails queued)
  • Dovecot POP3/IMAP servers
  • IPFS
    bandwidth, peers
  • Squid proxy servers
    multiple servers, each showing: clients bandwidth and requests, servers bandwidth and requests
  • Hardware sensors
    temperature, voltage, fans, power, humidity
  • NUT and APC UPSes
    load, charge, battery voltage, temperature, utility metrics, output metrics
  • PHP-FPM
    multiple instances, each reporting connections, requests, performance
  • hddtemp
    disk temperatures
  • SNMP devices
    can be monitored too (although you will need to configure these)

And you can extend it, by writing plugins that collect data from any source, using any computer language.

Best Hacking Tools Of 2017: ADBrute


If you have an Active Directory environment, you want to make it as secure as possible. ADBrute allows you to test the security of your Active Directory users. When a domain user’s network account expires, or when the account is locked due to incorrect login attempts, the domain administrator may reset the password to a default password based on company policy. If your users do not change their password after it has been reset by the administrator, it creates a major hole in your security.

A malicious user could easily use the default password to log in to the victim’s account; delete, read, and send mail; or access other resources on the network.

ADBrute is simple to use:

  1. Run ADBrute.
  2. Enter the name of the domain controller and valid login credentials to connect to the Active Directory. The user can be any user on the domain.
  3. Click on Login and wait until the entire user list for your organization is populated from the AD.
  4. You can double click on a User to view additional information.
  5. Enter the default password for your organization and press the start button.
  6. Sit back until the program scans and enumerates users who use the default password.
  7. You can export both lists (the entire user list as well as the weak user list) to three different file formats: .csv, .txt, and .xls.

You can get more information and download the tool here.

Hashcat Now Cracks 55-Character Passwords


Hashcat is a freely available password cracker. It can be used by security auditors to stress-test company passwords and by criminals to crack lists of stolen passwords. One of the biggest issues with this utility has been an inability to handle passwords longer than 15 characters. The latest version can now handle passwords and phrases typically up to 55 characters in length.

The latest version of hashcat, released last month, is a significant update to the program. Jens Steube, lead developer, says the update is “the result of over 6 months of work, having modified 618,473 total lines of source code.”

What the new version of hashcat should show you is that size alone is no longer as important as it used to be; it’s what the user does with the characters that matters. Length is still important, but more important is using a mix of characters (numbers, special characters, and punctuation symbols) to make the process of password discovery too slow even for a determined hacker.
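
A rough way to see the effect of character mix, as a sketch assuming a pure brute-force attack that must try every combination of charset size raised to the password length:

```shell
# Number of candidate passwords a brute-force attack must cover:
# charset_size raised to the power of the password length.
pw_keyspace() {
  echo $(( $1 ** $2 ))   # $1 = charset size, $2 = length
}

# 8 characters drawn from lowercase letters only (26 symbols)
echo "lowercase only: $(pw_keyspace 26 8)"

# 8 characters drawn from all 95 printable ASCII symbols
echo "full mix:       $(pw_keyspace 95 8)"
```

Under these assumptions, the mixed-character keyspace is tens of thousands of times larger at the same length, which is the extra work that numbers, special characters, and punctuation force onto an attacker. Real crackers like hashcat use wordlists and rules rather than raw brute force, so treat this as an upper-bound illustration, not a strength guarantee.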

You can learn more and download the free program here.