Thursday, December 19, 2019

Packet Capture vs Accurate Packet Capture (Chris Greer)

I just wanted to take a few minutes to share the results of some of the "Capture Limit" testing I have been doing in my lab. These results were shared at Sharkfest Europe 2019 in Estoril, Portugal. The purpose of the session was to discuss the considerations of building your own capture appliance. I am not trying to promote any specific product; rather, my goal is to demonstrate the limits where the accuracy of a capture on a laptop becomes questionable.

During my performance testing, I found that there was a huge difference between capturing everything (no packet loss) and capturing everything correctly (packet timing is accurate). Before getting into the results of the testing, let me tell you a bit about my setup.

My line-rate 1 Gbps traffic generation box sent benign IP traffic (protocol ID 99) to a target machine. The connection was tapped twice: one feed went from a network tap to my MacBook Pro, and the other went through a ProfiShark device to a second capture point. The ProfiShark captures and timestamps packets in hardware on the device itself, while the network tap simply forwards the traffic to the capture device, where packets are collected and timestamped in software.

The traffic stream was sent as either small packets (100 bytes), medium-sized packets (512 bytes), or large packets (1518 bytes). My traffic generator could only do one packet size per test, so I ran it a few times to see the differences. I gradually reduced the throughput rate until capture point A would (a) keep up with the ingress rate, and (b) accurately timestamp the packets.
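As a reference point for what "accurate" should look like, the expected inter-packet gap at line rate follows directly from the frame size. Here is a minimal sketch of that arithmetic (the 8-byte preamble and 12-byte inter-frame gap are the standard per-frame Ethernet overheads on the wire):

```python
# Expected inter-packet gap for back-to-back frames at a given link rate.
# On the wire, each Ethernet frame also carries an 8-byte preamble and a
# 12-byte inter-frame gap, so those are added to the frame size.

def inter_packet_gap(frame_bytes, rate_bps):
    """Seconds between frame starts at line rate."""
    wire_bytes = frame_bytes + 8 + 12
    return wire_bytes * 8 / rate_bps

for size in (100, 512, 1518):
    gap_us = inter_packet_gap(size, 1e9) * 1e6
    print(f"{size:>4}-byte frames at 1 Gbps: one frame every {gap_us:.2f} us")
```

Even the largest frames arrive only about 12 µs apart at 1 Gbps, which is why a multi-millisecond inter-packet delta in a capture points to timestamping trouble rather than real network behavior.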

Here were the results, with 100,000 packets generated per test:

Let's examine these results.

In the test, 100,000 packets were sent to the target with varying packet sizes and throughput rates. Notice that at capture point A, I was only able to capture all the packets once the rate was turned down to about 250 Mbps. Even then, there was a ton of false jitter in the packets. The inter-packet delta times were all over the place, with a maximum value of around 20 milliseconds. This is pretty bad considering that the deltas should have been no higher than a few microseconds.
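This kind of false jitter is easy to spot if you export the frame timestamps (for example with tshark -T fields -e frame.time_epoch) and compute the deltas yourself. Here is a minimal sketch using synthetic timestamps in place of a real capture; the injected 20 ms gap stands in for the kind of spike I saw at capture point A:

```python
# Inter-packet delta statistics from a list of capture timestamps (seconds).

def delta_stats(timestamps):
    """Return (min, max, mean) inter-packet delta times in seconds."""
    deltas = [b - a for a, b in zip(timestamps, timestamps[1:])]
    return min(deltas), max(deltas), sum(deltas) / len(deltas)

# Synthetic capture: 1518-byte frames at 1 Gbps arrive about 12.3 us apart;
# one injected 20 ms gap simulates a false-jitter spike in the capture.
ts, t = [], 0.0
for i in range(1000):
    ts.append(t)
    t += 12.3e-6
    if i == 499:
        t += 0.020  # the bad timestamp

d_min, d_max, d_mean = delta_stats(ts)
print(f"min {d_min*1e6:.1f} us, max {d_max*1e3:.2f} ms, mean {d_mean*1e6:.1f} us")
```

On a real capture you would feed the epoch timestamps straight into delta_stats; a maximum delta orders of magnitude above the mean is the signature of a capture point that cannot keep up.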

The second thing to note is that the timestamping was off at capture point A until the rate was backed down to about 10 Mbps. At this point, the delta times smoothed out and the capture device was able to keep up with the ingress traffic, timestamping it appropriately.

These tests were run both in the Wireshark GUI and on the command line with dumpcap. The results were only slightly better with dumpcap.

All the while, the hardware-backed appliance was able to keep up with line-rate 1 Gbps, with correct timestamping.

Conclusion

If I am going to capture a packet stream at any more than 10 Mbps of throughput, it's best to do it with my ProfiShark or another purpose-built appliance. Capturing all the packets is not enough for me - I also need them to be timestamped correctly. Hence the difference between packet capture, and accurate packet capture!

Got questions? Let's get in touch!





Sunday, December 15, 2019

The Rise of Artificial Stupidity (by Paul W. Smith)

 

"Never underestimate the power of stupid people in large groups.”George Carlin

Judging from the number of times we use the word “stupid” in our daily discourse, you might conclude that it’s on our minds a lot. It should come as no surprise that Merriam-Webster has numerous definitions for stupid. We have all vilified a stupid computer that loses our work, or one that insists on peppering us with stupid popups. Exasperating events, as well as those which hold no interest for us, happen much too often.

It is a rare person who has never made a stupid decision, although those committed by unthinking individuals other than ourselves are much more common. For folks who are slow of mind, prone to unintelligent choices, acting in a careless manner, or just lacking in reason, MW also has a word. Stupid.

Whether driven by our preoccupation with stupidity or by our giddy infatuation with technology, we are rapidly pressing onward with the development of Artificial Intelligence. American author Sebastian de Grazia, who is often described as the “Father of Leisure”, predicted in 1967 that by the year 2020 automation technologies would give us a 16-hour workweek. De Grazia also warned that this would lead to boredom, immorality and personal violence. We are still waiting for the 16-hour workweek.

As with other shiny new technologies, the push to develop AI is driven by good intentions; better medical care, safer vehicles, and more efficient cities are often cited. The lingering fear that AI will take over our jobs does not seem to be impeding progress. Although the predictions from the “Father of Leisure” remain enticing, the reality is that some of us may end up with 16-hour workdays, while others are left with zero-hour workweeks.

While the objections to AI are focused on the jobs it will take, not much has been said about the subtle changes it will make in our culture. What happens when our daily tasks are taken over by computers? Do we understand the impact of the things we routinely do on our physical and mental health? We can already see how voice guidance from a GPS distorts the spatial awareness we once got from looking at a map. What other AI-induced changes await us?

Who among us wouldn’t love to have a personal assistant, one who would answer our phone, schedule our appointments and make restaurant reservations? Google Duplex is closing in on exactly that. Not only will it screen your calls and secure those coveted tables, it will do so in a convincing way, mimicking all the pauses, “ums” and “ahs” that humans typically use. Your maître d’ will never suspect that he was talking to a machine. We provide the basic data and constraints and AI takes over the human role, engaging with other humans by forming sentences and communicating intent.

If AI can communicate convincingly once given the rules, it’s not hard to imagine that at some point it will be able to listen and record information. Note taking, whether in meetings, lecture halls, or even courtrooms, is a tedious chore that most of us would gladly hand off to a machine. Won’t it be nice when we can depend on AI to hand us the written transcript of anything we desire? The connection between hearing, transcribing, prioritizing and recalling the material will be lost.

As for communicating in different languages, AI has been doing that in more ways than we realize. As annoying (or even humorous) as auto-correct can sometimes be, it is essentially the beginnings of a system which translates from one language (a grammatically incorrect or wrongly spelled one) to a proper one. That AI system can not only convincingly make restaurant reservations, but now it can also make them in French (and in your own voice).

Where AI really veers out of its lane is in the area of speaking and writing. Digital media has already started taking away our pencils, and with them a fundamental connection with how we think and communicate. Our various forms of language are not annoying barriers between us so much as windows into our innermost thinking. Adjusting tone and presentation, reflecting on and modifying our ideas and assessing our purpose are real-time processes that occur during human-to-human connection.

Nobel Laureate and famous free-thinker Richard Feynman wrote about AI at a time when the technology was in its infancy. Dr. Feynman drew an analogy with the development of a machine to run fast like a cheetah. You could certainly study videos of running cheetahs, connect motors, linkages and software, and probably build a machine that accurately mimics a cheetah. You might also note that it’s much easier to build a faster machine using wheels, or perhaps even one which flies just above the ground. AI will never think exactly like humans, but it can do some things much better and faster.

There are tedious (or dangerous) jobs that make good candidates for replacement. Assembly line workers, tax preparers or truck drivers are a few that should probably consider new careers. A radiologist reading x-rays or an oncologist formulating the most effective chemotherapy treatment will be difficult to replace but could benefit from AI augmentation. Some jobs will be missed in the short term, but the long-term benefit to humanity will be worth it.

Artificially Intelligent devices may take accurate notes, but will they ever capture that little twinkle in the Professor’s eye hinting that a certain topic will be on the next exam? When I struggle through a conversation in the small central Italian village of Soriano nel Cimino, am I just getting directions or am I connecting to the culture in profound ways that leave a lasting impression on my personal worldview? The genius of AI lies not in pushing it into all corners of our daily lives, but in recognizing where it frees us from ordinary activities, and where it begins to take away that which makes us human.

“The difference between stupidity and genius is that genius has its limits.” – Albert Einstein

Author Profile - Paul W. Smith - leader, educator, technologist, writer - has a lifelong interest in the countless ways that technology changes the course of our journey through life. In addition to being a regular contributor to NetworkDataPedia, he maintains the website Technology for the Journey and occasionally writes for Blogcritics. Paul has over 40 years of experience in research and advanced development for companies ranging from small startups to industry leaders. His other passion is teaching - he is a former Adjunct Professor of Mechanical Engineering at the Colorado School of Mines. Paul holds a doctorate in Applied Mechanics from the California Institute of Technology, as well as Bachelor’s and Master’s Degrees in Mechanical Engineering from the University of California, Santa Barbara.
