September 30, 2022

Where You Decrypt Has A Big Effect on Network Data


It’s often said that what you say or do matters less than how you say or do it. The same holds true for network data decryption. Many businesses today decrypt and inspect network data, and for good reason: it’s been estimated that 70% of malicious traffic is now embedded within encrypted traffic.


However, one thing you don’t hear much about is where to decrypt that data. For instance, researchers at Enterprise Management Associates (EMA) found in their 2022 report (Network Visibility Architecture for the Hybrid, Multi-Cloud Enterprise) that 43% of study participants decrypted traffic at each analysis tool, just prior to inspection. While this is a perfectly valid approach, I would submit that it is probably NOT the best one.


Consider this: there are two fundamental locations where you can perform data decryption:

  • At each security tool

  • One centralized location

Since most companies with an interest in security run multiple security analysis tools, they often purchase decryption capability for each tool. This strategy can create three problems:

  1. Non-standard decryption algorithms across tool manufacturers can leave you without the decryption capability you need when malware appears.

  2. Wasted CPU cycles, as each tool must decrypt and re-encrypt the same traffic again and again. Decryption at every tool can slow your network and increase the odds that decryption gets disabled. ZK Research found in one of its surveys that when decryption slows the network to a crawl, 45% of security engineers just turn it off, leaving them with no decryption at all.

  3. Runaway costs from growing tool requirements can tempt some teams to take shortcuts in the visibility architecture, such as spot monitoring or using SPAN ports instead of dedicated hardware taps (which may not meet compliance or visibility requirements).

The alternative to decrypting at every tool is to decrypt once at a central hub, for example as part of a network visibility architecture in which a network packet broker performs the decryption and re-encryption. Once the packet broker decrypts the data, it can pass multiple copies to security tools in parallel, or pass the data serially from one tool to the next, per your architecture requirements. Once the data has been fully examined and the good data returned to the packet broker, the broker re-encrypts it and sends it on into the network.
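To make that data flow concrete, here is a minimal Python sketch of the “decrypt once, fan out” pattern. The tool endpoints, certificate files, and upstream hostname are all hypothetical placeholders, and a real packet broker does this in purpose-built hardware at line rate; the sketch only illustrates terminating TLS once and sharing the plaintext with several tools.

    import socket
    import ssl

    TOOL_ENDPOINTS = [("10.0.0.11", 9000), ("10.0.0.12", 9000)]  # hypothetical analysis tools
    UPSTREAM = ("10.0.0.99", 443)                                # hypothetical destination server

    # Terminate TLS once, at the central hub.
    server_ctx = ssl.SSLContext(ssl.PROTOCOL_TLS_SERVER)
    server_ctx.load_cert_chain("hub-cert.pem", "hub-key.pem")    # hypothetical certificate files

    listener = socket.create_server(("0.0.0.0", 8443))
    with server_ctx.wrap_socket(listener, server_side=True) as tls_listener:
        conn, addr = tls_listener.accept()        # the accepted session is already decrypted
        plaintext = conn.recv(65535)              # cleartext, produced exactly once

        # Fan out one plaintext copy to every inspection tool.
        for host, port in TOOL_ENDPOINTS:
            with socket.create_connection((host, port)) as tool:
                tool.sendall(plaintext)

        # Re-encrypt once and send the traffic on into the network.
        client_ctx = ssl.create_default_context()
        with socket.create_connection(UPSTREAM) as raw:
            with client_ctx.wrap_socket(raw, server_hostname="upstream.example.com") as upstream:
                upstream.sendall(plaintext)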


EMA researchers found that 25% of businesses decrypted data using a network visibility architecture. This strategy created the following positive outcomes:

  1. Traffic aggregation, which maximizes tool efficiency with fast FPGA processing and sends the right traffic to the right tool

  2. Load balancing of the data sent to tools, which maximizes tool farm efficiency

  3. A “decrypt one time to analyze all data” strategy, which greatly improves your success in detecting malware and eliminates per-tool decryption license fees

Decryption and encryption are resource-hungry activities that are best done once, in your network visibility architecture, on hardware built to maximize the efficiency of your analysis tools. They should not be done repetitively at each security analysis tool.


Where you decrypt matters. Decrypt once, in your network visibility architecture, and maximize the benefits of your analysis tools.


Whether you are looking to reduce costs, meet compliance requirements, or enhance your security posture, Keysight is here to help. We offer a range of network visibility and network security solutions for both NIST and CISA compliance. Reach out to Keysight Technologies and we can show you how to optimize your security solutions.


For additional information, download the brief, Where You Decrypt Network Data Matters.

September 29, 2022

Modern IT Architecture - Trends & Challenges


Network architects are moving toward hybrid environments, scalable technologies, and cloud networks. But when an organization needs to upgrade its networks, add new security or monitoring tools, or move to the cloud, it faces a number of security and performance challenges.


As covered in earlier blogs, your network needs a strong foundation of visibility in order to expand with your company without increasing threats and performance issues. When migrating to the cloud or preparing your network architecture for future growth, visibility fabrics and deep observability pipelines should be considered early on; they are essential for maintaining consistent and complete visibility throughout a hybrid environment.


Alastair Hartrup, CEO at Network Critical, says, “applying visibility at the network packet level can help keep your budget in line without compromising the protection provided by security appliances. They can also provide the scale necessary to grow without going off-budget. In both budgeting and design, diligent planning and disciplined execution can save, not cost.”


Network Critical’s SmartNA™ range of hybrid TAPs and Packet Brokers provides features like aggregation, filtering, and load balancing of all data in real time, simplifying network management by getting the right data to the correct tool, improving performance, and reducing costs.


The SmartNA™ range (1/10/40/100G and 400G) not only covers fundamental packet capture and filtering but also meets the growing demand for hybrid environments. Network Critical’s products support many packet manipulation functions, including stripping, slicing, and masking data, to help comply with privacy regulations such as HIPAA, SOX, and the EU’s GDPR. Additionally, the SmartNA-XL™ features GRE encapsulation to monitor your multi-site networks from a centralized location.
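As a rough illustration of what GRE encapsulation does (this is a generic sketch, not Network Critical’s implementation), the Python/scapy snippet below wraps a mirrored packet in an outer IP/GRE header so it can travel to a central collector; all addresses are hypothetical:

    from scapy.all import IP, TCP, GRE, send

    mirrored = IP(src="192.168.1.10", dst="192.168.1.20") / TCP(dport=80)  # a captured packet
    collector = "203.0.113.5"                                              # central monitoring site

    # Outer IP header + GRE header, with the original packet untouched inside.
    tunneled = IP(dst=collector) / GRE() / mirrored
    send(tunneled)  # sending raw IP packets requires root privileges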


To learn more about IT architecture trends and how to stay ahead of this ever-changing technology, contact the team of experts at networkcritical.com/contact-us.

September 26, 2022

NMAP Subnet Scan

I had to perform a subnet scan for a client and, unfortunately, they did not have any tools, so I suggested using NMAP (www.nmap.org).

For those of you who are unfamiliar with NMAP, you can specify a subnet scan using any of the following three target notations: subnet/mask, an IP address range, or an IP address with the * wildcard. For example, on my network that would be 10.44.10.0/24, 10.44.10.1-254, or 10.44.10.*.
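As full commands (assuming a standard NMAP install), those three notations look like this:

    nmap 10.44.10.0/24        (subnet/mask notation)
    nmap 10.44.10.1-254       (IP address range)
    nmap '10.44.10.*'         (wildcard; quoted so the shell does not expand it)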

As I was performing the scan, I explained that you should always ‘know your tool’ by simply running a packet capture alongside it. All you have to do is start, stop, and save your capture with a descriptive name. Even if you don’t have time to go through it now, or go through it thoroughly, it’s there for future reference.
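For instance, a capture like this can run alongside the scan (the interface name and the capture filename here are just placeholder examples):

    tcpdump -i eth0 -w nmap-subnet-scan.pcap     (press Ctrl-C to stop and save)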

In this video I show you some of the NMAP behavior we spotted. The first thing we noticed was that NMAP performed discovery using an ARP scan, then used reverse DNS lookups to determine the host names. This is where we went down a bit of a rabbit hole: I noticed that my computer was communicating with the correct DNS servers, but then went off and communicated with two other IP addresses.
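If you want to isolate that behavior in your own capture, Wireshark display filters along these lines narrow the view (10.44.10.1 stands in for a hypothetical expected DNS server; substitute your own):

    arp or dns                            (only the discovery and name-lookup traffic)
    dns and !(ip.addr == 10.44.10.1)      (DNS conversations with anything other than the expected server)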

In the video I show you how I figured it out and then how NMAP used the same TCP return port number for its port scans.



September 21, 2022

How Ease of Use Impacts Network Visibility


A fundamental question for network visibility solutions almost always involves the following: “How can you improve the short-term and long-term operating costs of your monitoring solution?” Fortunately for all of us, Tim “The OldCommGuy™” O’Neil has shared the answer in one of his whitepapers, The Technical and Financial Impact of Ease of Use on Network Visibility Solutions.


The answer to the question above involves two fundamental steps:

  • Update your monitoring processes to the best technology

  • Optimize your solutions to take full advantage of ease-of-use functionality


When it comes to taking advantage of the best technology, two examples are taps and network packet brokers (NPBs). For instance, taps are a better choice for data collection than SPAN ports. Tim covers the reasons in detail in his white paper, but the basic gist is that taps make a complete copy of all the data (good and bad), while with SPAN ports it is hard to tell exactly what you have: packets can be missing for a multitude of reasons, and you won’t realize that the SPAN port failed to deliver important data.


In addition, you’ll want to add a network packet broker to optimize your filtering methodology and the related filter-programming costs. Your security and monitoring tools don’t need, and don’t want, to see EVERY packet; they just want the relevant packets as quickly as possible. Well-designed NPBs let you aggregate, deduplicate, filter, and regenerate the data you need (at line rates of 40, 100, and up to 400 Gbps), sending the right data to the right tool at the right time.
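As a rough software sketch of what deduplication and filtering mean (real NPBs do this in hardware at line rate; the capture and output filenames are hypothetical), a Python/scapy snippet like this drops duplicate packets and forwards only the traffic one tool cares about:

    from scapy.all import rdpcap, wrpcap, TCP

    seen = set()
    keep = []
    for pkt in rdpcap("tapped.pcap"):
        raw = bytes(pkt)                  # naive duplicate key: the raw packet bytes
        if raw in seen:
            continue                      # deduplicate: skip packets already forwarded
        seen.add(raw)
        if TCP in pkt and pkt[TCP].dport == 443:
            keep.append(pkt)              # filter: this tool only wants HTTPS traffic

    wrpcap("to-security-tool.pcap", keep)  # the right data for the right tool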


The second step is to optimize the ease-of-use benefit. Ease of use includes installation, training, and day-to-day programming complexity. According to Tim, using a graphical user interface (GUI) can cut your long-term operating costs by 75% or more, because a GUI drives higher productivity than a command line interface (CLI) or a menu-driven interface.


Another important question is whether the device can be operated effectively by most personnel without training and retraining. “Usability” is the key factor that allows organizations to use network equipment with ease and still be assured that they are getting a true, reliable, and repeatable view of their traffic and network operations.


The combination of both steps above will allow you to effectively reduce your TCO and reuse the savings to address additional needs. Better data is critical to catching security threats and reducing troubleshooting and forensic analysis costs.


Check out Tim’s white paper for more information on these topics.
