Tuesday, September 15, 2020

Common Sense

According to the Internet, the phrase “Common sense is not so common” originated with a Frenchman – François-Marie Arouet – who was a leading figure during the Age of Enlightenment. François, who had a knack for catchy phrases, began writing them at the age of 12. Eighteenth-century authorities were not always amused, and he often found himself in and out of the Bastille. He adopted the pen name Voltaire and eventually moved to London.

For those of us who work in the STEM fields, common sense is frequently the starting point from which we design our hypotheses and launch our experiments. The process, loosely defined as the “scientific method”, took its modern form around 1600 CE, but its roots are generally credited to Aristotle. The great Greek philosopher believed that because the world is a real thing, the best way to discern the truth is by experiencing it.


Such empiricism is the foundation upon which the scientific community has built its enviable reputation, reinforced by the rigor with which the method is applied, peer reviewed, and communicated. “Follow the science” is an oft-heard refrain when complex choices present themselves.


While science has made contributions that changed the course of humankind, not all its discoveries have been trustworthy; there have been some notable failures along the way. Rarely has the path to any scientific discovery been without a few missteps, but some results received far too much credibility before eventually being debunked.


Attaching a famous name to a scientific discovery may add gravitas where none is warranted. One of the most respected physicists of his time, William Thomson (aka Lord Kelvin) was known for his contributions to the study of thermodynamics. Scientists he deemed “soft” (e.g., biologists and geologists) opined about an ancient earth, and so it was only natural for a “hard” physicist to try to prove them wrong. Noting that the once-molten earth was cooling, Kelvin used his thermodynamics calculations to estimate that the planet could be no more than 20-40 million years old. His arrogance and public influence further underpinned this “truth.”


Lord Kelvin’s bluster held fast until the advent of radiometric dating, which provided a more accurate method for estimating the age of things. We know now that the Earth is around 4.5 billion years old, and Kelvin should have confined himself to his eponymous temperature scale.


Even as renowned a physicist as Albert Einstein was not immune to scientific blunders. Albert was known for his elegant theories of General and Special Relativity, where he wrestled with the effects of gravity, mass and the speed of light. He also went along with a substantial number of his contemporaries in believing that the universe was static.


As great as Einstein’s General Theory of Relativity was, however, there was one catch: for it to work, the universe had to be either contracting or expanding. He fixed the apparent contradiction by conjuring the cosmological constant, known in layman’s terms as a fudge factor. As gravity pulled the entire universe inward, this cosmological constant provided the repulsive force that kept everything from collapsing.


True to Aristotle’s vision, empirical data once again overruled learned speculation. Edwin Hubble’s 1929 observation of the red-shift of galaxies proved that they were in fact moving away from us, consistent with Einstein’s General Theory of Relativity but rendering his cosmological constant obsolete. When faced with the data, Einstein admitted his mistake.


The last example is perhaps the most familiar. Prior to their press conference in 1989, the names Stanley Pons and Martin Fleischmann were little known outside their own specialty field of electrochemistry. That quickly changed when they announced to the world that they had achieved “cold fusion”, basically doing on the kitchen table what the Sun accomplishes at a core temperature of around 27 million degrees Fahrenheit. In spite of much well-deserved scientific skepticism, the world wanted their claim to be true because it could lead to an essentially endless supply of clean energy.


It was little more than a month later, after independent attempts to duplicate the results failed, that the energy spikes reported by Pons and Fleischmann were attributed to tritium contamination in their apparatus. Instead of receiving the Nobel Prize for their work, the two electrochemists became forever branded as the originators of “Fusion Confusion.” Shortly thereafter, work on building enormous multi-billion-dollar fusion reactors resumed.


Over 2000 years ago, Aristotle recommended that we pay attention to the real world and base our conclusions on what we see. While a great reputation or the promise of an epic breakthrough is compelling, we should always be wary of results that don’t pass the ever-reliable smell test. Good science requires innovation, knowledge, experience, patience, hard work, and peer review, along with a healthy dose of common sense.


Voltaire himself said it best – “Cherish those who seek the truth but beware of those who find it.”

Author Profile - Paul W. Smith - leader, educator, technologist, writer - has a lifelong interest in the countless ways that technology changes the course of our journey through life. In addition to being a regular contributor to NetWorkDataPedia, he maintains the website Technology for the Journey and occasionally writes for Blogcritics. Paul has over 40 years of experience in research and advanced development for companies ranging from small startups to industry leaders. His other passion is teaching - he is a former Adjunct Professor of Mechanical Engineering at the Colorado School of Mines. Paul holds a doctorate in Applied Mechanics from the California Institute of Technology, as well as Bachelor’s and Master’s Degrees in Mechanical Engineering from the University of California, Santa Barbara.

Wednesday, September 2, 2020

Syslog – Use it!!

Syslog has been lumped in with SNMP as an ineffective, insecure way to monitor equipment, so I thought it was time I threw in my two bits.

I like to use syslog for the following reasons:

- Centralized location for many devices

- Standard interface when using different vendor make and models

- Easy to define similar alerts across multiple devices

- Send alerts or ‘push’ as they happen

- I don’t need any device passwords to check device logs or events


A quick Google search will reveal a ton of syslog applications; just be prepared to spend some time learning the various product differences. Here’s what I look for:

- Support for a large number of vendors and devices

- The ability to add or customize alerts

- An easy filtering engine or interface

- Bonus: the ability to set email alerts

The only advice I can give when learning how to use syslog is to determine ahead of time what kind of devices you want to monitor and make sure the product fits that need. For example, in most cases you will use it with network equipment, but in some specific circumstances I’ve used it with printers when they are in a public area. The other point worth noting is to test your syslog server in various scenarios – device boot-up, interface flapping, and anything else you normally have to troubleshoot manually.
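If you want to see what is actually arriving on the wire before committing to a product, a few lines of Python are enough to watch raw syslog messages come in. Below is a minimal sketch of a UDP listener, for illustration only: port 514 is the standard syslog UDP port, but the host binding and the decision to simply print messages are assumptions, and a real syslog server does far more (parsing, storage, filtering, alerting).

# minimal_syslog.py - bare-bones UDP syslog listener (illustration only).
# Binding port 514 usually requires elevated privileges; pick a port above
# 1024 and reconfigure your devices if you'd rather not run with them.
import socketserver

SYSLOG_HOST = "0.0.0.0"   # listen on all interfaces (assumption)
SYSLOG_PORT = 514         # standard syslog UDP port

class SyslogHandler(socketserver.BaseRequestHandler):
    def handle(self):
        # For a UDP server, self.request is a (data, socket) tuple.
        data = self.request[0].strip()
        message = data.decode("utf-8", errors="replace")
        # RFC 3164 messages begin with a <PRI> field, e.g. "<34>Sep  2 ..."
        print(f"{self.client_address[0]}: {message}")

if __name__ == "__main__":
    with socketserver.UDPServer((SYSLOG_HOST, SYSLOG_PORT), SyslogHandler) as server:
        server.serve_forever()

Point a test device (or Python’s own logging.handlers.SysLogHandler) at the machine running this script and you will see the “push as it happens” behavior immediately.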



Tuesday, August 25, 2020

Validating Network Performance with a Throughput Test

Over the following weeks, our sponsor NetAlly will continue to share tips that will help you troubleshoot your network faster. This second tip shows how to move beyond basic connectivity tests by validating network throughput with the iPerf tool or the Network Performance Test.

---------------------------

The core job of a network is to reliably transport data from one point to another, as quickly as possible. If this is accomplished, end users can experience applications and services without skips, lags, and delays induced by the network.


However, when a problem does strike, engineers want to quickly determine whether or not the network is to blame. Better yet, they should proactively baseline the network before problems start! One way to do that is to test the network path with tools such as iPerf or the Network Performance Test from NetAlly. Let’s look at both and see how they can help us spot network problems quickly.

iPerf is an open-source software tool that performs a network throughput test between two endpoints. It must be installed on both ends of the connection under test. One end acts as a server, opening a service on the default port of TCP 5201 or 5001, depending on the version in use. The other end acts as a client, which initiates the connection to the server and runs the test. A number of options can be used with iPerf, such as UDP streams rather than TCP, bi-directional testing (versus one-way), custom TCP windowing, TCP MSS (maximum segment size) adjustment, multi-threading, and many more.

However, iPerf is not without its downsides. Since test traffic on both sides is generated by unknown hardware and software stacks, it is common for iPerf to be limited in the throughput it can achieve. Rarely is it able to truly fill a network pipe, especially on multi-gigabit connections or in high-performance data centers. In cases where we want to validate the maximum performance of the network, it is essential to use hardware-based throughput tests that do not suffer from the limitations of “off-the-shelf” endpoints.
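For readers who want to script a quick baseline, here is a hedged sketch that drives iperf3 from Python and pulls the received throughput out of its JSON report. It assumes iperf3 is installed on the client, a server is already running (“iperf3 -s”) at the placeholder address below, and a recent 3.x version whose JSON output includes the end/sum_received fields for TCP tests.

# run_iperf_client.py - drive an iperf3 TCP test and parse the result.
import json
import subprocess

SERVER = "192.0.2.10"   # placeholder address - replace with your iperf3 server

# -c: client mode, -t 10: run for 10 seconds, -J: emit JSON for parsing
result = subprocess.run(
    ["iperf3", "-c", SERVER, "-t", "10", "-J"],
    capture_output=True, text=True, check=True,
)
report = json.loads(result.stdout)
bps = report["end"]["sum_received"]["bits_per_second"]
print(f"Received throughput: {bps / 1e6:.1f} Mbps")

Scheduling a script like this to run periodically is one simple way to build the proactive baseline mentioned above.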

The EtherScope™ nXG provides two ways to stress-test the network: the Network Performance Test app or the built-in iPerf app. Both require an endpoint on the far side of the connection. The iPerf app needs a software iPerf server, just like the standard tool does. The Network Performance Test, by contrast, can connect to up to four remote endpoint devices for simultaneous connection testing, and the remote endpoint can be a software reflector, a hardware reflector, or a remote peer. Let’s discuss the differences among the three types of endpoints for the Network Performance Test.


Software Reflector:

This is a software application that can be freely downloaded from the NetAlly website and installed on a device running Windows 7 or later. There is no licensing limit on the number of software reflectors that may be downloaded and deployed. The software reflector provides an easy means of configuring a device to act as an endpoint for the Network Performance app on the EtherScope™ nXG.


Being a reflector, the software will take any packet received, flip the source and destination MAC and IP addresses, then send that packet back to the source. On the EtherScope™ nXG, this will show up as a roundtrip test result.
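To make the reflector idea concrete, here is a toy sketch in Python. Be clear about the difference: NetAlly’s reflectors operate at the packet level, rewriting MAC and IP headers, while this sketch works at the UDP socket layer, where the operating system handles the addressing. The port number is an arbitrary placeholder.

# udp_reflector.py - toy reflector: every datagram received is sent straight
# back to its source, so the sender can measure a round trip.
import socket

PORT = 3842  # arbitrary placeholder port

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.bind(("0.0.0.0", PORT))
while True:
    data, source = sock.recvfrom(65535)
    sock.sendto(data, source)  # reflect the payload back to the sender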


Due to the limitations often suffered by software on a laptop or unknown device, the software reflector is not recommended for throughput rates above 100Mbps. Above this rate, it is difficult to determine if any packet loss is the result of the endpoint reflector device dropping packets or the network itself. This test is best used for doing lower-throughput pre-deployment testing of networks for services such as VoIP.


Hardware Reflector:



A hardware reflector functions much the same as the software reflector. The main difference is that a hardware reflector can process packets at full line rate. This ensures that any loss observed in the test is the result of the network dropping packets, not the test instrument itself. The LinkRunner AT 2000 and the LinkRunner G2 from NetAlly both act as hardware reflectors for the EtherScope nXG and will reflect traffic at up to 1Gbps.


Peer:

Unlike a reflector, a peer generates its own traffic stream back to the initiating tester rather than simply reflecting the same stream, which allows performance to be measured independently in each direction. If packet loss is observed, it can be determined whether the loss occurred on the way to the peer or on the way back. In addition, remote peers support asymmetrical data rates, allowing traffic patterns that better simulate real application behavior, such as file transfers.
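To see why a reflector can only report roundtrip results while a peer can separate the two directions, consider this hedged sketch, which probes the toy reflector above and measures round-trip loss and latency. The address, port, and probe count are placeholders; note the comment on the timeout: with a reflector, a lost probe cannot be assigned to the outbound or the return leg, which is exactly the gap a peer closes.

# rtt_probe.py - measure round-trip loss and latency against a reflector.
import socket
import time

REFLECTOR = ("192.0.2.20", 3842)  # placeholder address and port
PROBES = 100

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(1.0)
rtts = []
for seq in range(PROBES):
    start = time.monotonic()
    sock.sendto(seq.to_bytes(4, "big"), REFLECTOR)
    try:
        sock.recvfrom(64)
        rtts.append(time.monotonic() - start)
    except socket.timeout:
        pass  # a round-trip loss; we cannot tell which leg dropped the packet

if rtts:
    loss = 100 * (PROBES - len(rtts)) / PROBES
    avg_ms = sum(rtts) / len(rtts) * 1000
    print(f"loss: {loss:.0f}%  avg RTT: {avg_ms:.2f} ms")
else:
    print("all probes lost")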

Devices that may be used as peers include OneTouch AT 10G, EtherScope™ nXG, and the NETSCOUT OptiView® XG. Each of these devices may be configured in peer mode and used as an endpoint for the Network Performance app on the EtherScope™ nXG.


The Network Performance Test with a hardware reflector or peer endpoint can generate and measure line-rate traffic up to 10Gbps, giving it an advantage over iPerf with a software remote. Metrics include overall throughput capacity, packet loss, latency, jitter, and QoS. When a problem strikes, engineers can validate network throughput without having to wonder whether the test itself, rather than the network, is the limitation.


Whether you are using iPerf on both ends or (preferably) the EtherScope nXG Performance Testing app, throughput testing is a great way to ensure your network can transport application traffic end-to-end without packet loss or high latency. These tests will quickly show whether the network is to blame, or provide evidence to the contrary, when application performance suffers.


