As part of understanding the basics of Cisco ASA firewalls, here are some of the commands used to configure a Cisco ASA firewall in a real-world scenario.
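For example, a minimal edge configuration on an ASA (an inside and an outside interface plus dynamic PAT for the inside network) looks roughly like the sketch below. The interface numbers, IP addresses, and the inside-net object name are placeholders for illustration, and the NAT syntax shown assumes ASA software 8.3 or later, so adjust for your version.

    hostname ASA-EDGE
    !
    interface GigabitEthernet0/0
     nameif outside
     security-level 0
     ip address 203.0.113.2 255.255.255.252
     no shutdown
    !
    interface GigabitEthernet0/1
     nameif inside
     security-level 100
     ip address 192.168.1.1 255.255.255.0
     no shutdown
    !
    ! translate the inside subnet to the outside interface address (PAT)
    object network inside-net
     subnet 192.168.1.0 255.255.255.0
     nat (inside,outside) dynamic interface
    !
    ! default route toward the ISP
    route outside 0.0.0.0 0.0.0.0 203.0.113.1

After making changes, verify with show running-config, show interface ip brief, and show nat.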
A local business that I help out when they need a hand called me and explained that they've had internet performance issues since day one.
They upgraded their internet service to fibre, and their local computer guy blamed the Wi-Fi, so they put in a new Wi-Fi mesh solution; now the same computer guy is blaming their switch. My friend is getting suspicious and asked if I had a few minutes to come by, check things out, and give a second opinion.
It took about 10 minutes for my network senses to start tingling...
First red flag: no documentation.
Second red flag: they have three wiring closets for a fairly small office, with no idea what terminates where (see the first point) or why there are three wiring closets.
Third red flag: no testing methodology; the IT reseller tells him to buy new gear, rip out the old gear, install it, and see what happens. Rinse and repeat.
Fourth red flag: patch panel cable IDs don't match the faceplates in the office or the actual cables behind the patch panel.
I started at the front desk computers (the source of the complaint) and showed them how to trace a cable, how to label, and some tips and tricks along the way.
I found an old 10/100 hub (yes, I said hub) that we swapped out for a gigabit switch.
I explained that a switch helps contain physical-layer errors, unlike a hub, which simply repeats everything it receives, errors and all, out every port.
After we traced the newly swapped switch connection back to the 'main switch', I noticed the port was running at 100 Mbps instead of 1 Gbps, which often points to a cabling issue: gigabit links need all four pairs, so a damaged or poorly terminated pair will frequently cause the link to negotiate down to 100 Mbps.
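If you have CLI access to a managed switch, you can confirm what a port actually negotiated instead of guessing. The commands below are Cisco IOS syntax, shown purely as an illustration since I don't know what that 'main switch' really is, and GigabitEthernet1/0/24 is a made-up port number; other vendors have close equivalents.

    show interfaces status
    ! a healthy gig port shows a-full / a-1000; a suspect one shows a-100 or a-half
    show interfaces GigabitEthernet1/0/24
    ! check the counters: CRC errors, late collisions, and runts usually point to bad cabling or a duplex mismatch

If a port keeps negotiating down to 100 Mbps, re-terminate or replace the cable before blaming the switch.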
I told him that he needs to trace out all the connections from the switch to the computers to better understand where the cabling runs and whether there are any more hubs on his network.
A few hours later I got an update: he found one more hub and a bunch of 'crazy cabling' that I needed to see. Since it's only 20 minutes from my house, I scooted over, and holy cow...
There were about a half dozen cables that were spliced like you see in the photo. Some of the network cables were actually spliced to old phone cabling.
I brought my cabling tools: crimper, RJ45 connectors, toner, punch-down tool, and labels. The client and his computer guy said they were familiar with how to terminate and tone cables, so I left them my tools and will follow up in a few days.
I think it's important to document, and learn, how to administer your equipment using both the GUI and the CLI.
In this example, I use both methods to add and delete a user on a Ubiquiti EdgeSwitch.
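Here is roughly what the CLI half of that looks like on an EdgeSwitch. Treat it as a sketch: the username and privilege-level syntax below is the FASTPATH-style form I'd expect, but it varies by firmware release, so check the EdgeSwitch CLI reference for your version before pasting anything, and jsmith and the password are just placeholders.

    enable
    configure
    ! create a read-write account (level 15 = full privileges)
    username jsmith password <new-password> level 15
    exit
    ! confirm the account exists
    show users
    ! ... later, remove the account ...
    configure
    no username jsmith
    exit
    ! save the change so it survives a reboot
    write memory

The GUI equivalent lives under the switch's user accounts page (the exact menu naming varies by firmware); add the account there, and delete it from the same screen when it's no longer needed.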
There has been attention, maybe even intense attention, paid to the security of United States civilian and government computer hardware systems over the last several years. I’m referring to the various discussions about foreign “white box” technology, the Huawei controversy, etc. But what about software security? There’s been lots of talk about hackers, malware, and state-sponsored hacking groups. However, the software powering these systems presents an equally pressing, yet often overlooked, concern.
There are two fundamental security risks with most software products today:
1. An over-reliance on open-source software (OSS)
2. Use of foreign software programmers and foreign software manufacturers
Let’s look at the first concern. OSS has become very popular these days. It can drive down product creation costs and improve time to market. However, software security and product integrity risks increase substantially with its use.
For example, one fundamental risk is that you are relying on others to adequately validate that the software is error-free. You can obviously do your own extensive validation, but most people and companies don’t seem to want to do that. After all, if you’re going to put that much effort into the software, you might as well have written it yourself and owned it, rather than working on something that your competitors, and anyone else who wants it, can have for free. Therefore, many parts of the code verification process are left to the crowd. Since this is done for “free” by the community, the verification process ranges from being done well to being done very poorly (and every level in between), which leads to unstable and insecure software.
A prime example of this is the Node.js ecosystem. According to a 2022 Dark Reading article, researchers at Johns Hopkins University reported that they found 180 different zero-day vulnerabilities spread across thousands of Node.js libraries. If you’re not familiar with Node.js, it’s a widely used JavaScript runtime, first released in 2009, with an enormous ecosystem of open-source libraries built on top of it. With what should have been more than a decade of community review, 180 zero-day flaws is a lot of risk to discover, especially if you are a product manufacturer delivering software solutions to the military or other government departments.
Another example is the Log4Shell vulnerability, which was found in the Log4j library in 2021. Apache Log4j is a popular Java library for logging error messages in applications. The original vulnerability, published as CVE-2021-44228, ended up being followed by three more related CVEs. Again, just because a piece of software was reviewed by a group doesn’t mean it’s safe.
Proponents of OSS will tell you this is the exception and not the rule. They repeatedly state that the community reviews the code to catch problems. While this may be happening, something appears to be very, very wrong. The Synopsys 2024 Open Source Security and Risk Analysis Report found the contrary to be the norm: 84% of the codebases assessed for risk contained open-source vulnerabilities, and 74% of codebases contained high-risk vulnerabilities. If communities are reviewing OSS as extensively as OSS proponents claim, they aren’t doing a very good job.
What happens to the code after a year, two years, and more? Does anyone go back and update it to eliminate (or at least reduce) software vulnerabilities? While there are some examples of this, the Synopsys report found that 91% of codebases contained components that had seen no new development in over two years. The report did show that the number improved by two points (dropping to 89%) for code that was around four years out of date.
What about all of the other open-source libraries being used? Not only could there be a lot of accidental “ticking time bombs” out there, but there could also be zero-day flaws discovered by bad actors (especially some foreign governments) that are deliberately not reported so that those flaws can be exploited later for nefarious purposes.
So, while every company has a different tolerance for security risk, relying on other companies to do the security analysis and vetting of OSS might not be such a safe bet for you or your customers (who will probably come after you if they are breached because of vulnerabilities in your product). One of the best ways to avoid the situation is to buy software solutions that do not rely heavily on OSS. While using 0% OSS is technically possible, you will be hard-pressed to find a manufacturer that does not use any OSS at all.
A reduced OSS dependency plan gives you two clear benefits. First, you have a good chance of substantially reducing your security risk by not using potentially compromised software. Second, bad actors are generally less inclined to spend time attacking proprietary code: there is little in it for them, it takes a lot of time to analyze code for defects, and it’s even harder for them to get their hands on proprietary code in the first place. It’s much easier to analyze OSS for flaws, find products using that OSS, and then attack multiple companies’ products that share the same code. Once they have found an OSS defect, they can literally attack 5, 10, or more products by exploiting the one or two defects they find in the OSS code.
Axellio uses United States citizen workers and does not overly rely on the use of open-source code. Axellio carefully manages its use of open-source components and rigorously tests and evaluates the code used to reduce exposure to vulnerabilities. If you want additional information, check out this sales brief on the Axellio website.
About Axellio
Axellio provides extreme high-performance, scalable, compact, economical, and simultaneous time-series data ingest, storage and distribution solutions for the defense and intelligence community at speeds exceeding 200 Gbps. Axellio’s PacketXpress® platform focuses on network traffic packet capture, distribution, and analysis for cybersecurity monitoring and forensic analysis, and is operationally deployed with the US Army worldwide. For intelligence, surveillance, and reconnaissance applications (ISR), Axellio’s SensorXpress offers ingestion and storage of RF data from sensors and distributes it to analysis applications simultaneously at rates exceeding 200 Gbps. Learn more about Axellio at www.Axellio.com.