
Network Monitor 3.4 Beta Released on Connect!


We are extremely excited to announce that the Network Monitor 3.4 Beta has been released on Connect. If you haven't done so already, please sign up (for free) and help us test the new version while exploring it. There are some great new features, UI enhancements, performance updates, and new APIs. Let's take a quick gander and see what's new.

UI Enhancements

Our focus in this arena was to provide a better user experience and make some great features easier to find. In that respect there are a few more buttons which make things like Color Rules, Aliases, and Columns easier to locate. And now we provide text next to each icon so that you no longer have to guess at what the hieroglyphic means. We also wanted to make it easier to customize the UI, as we understand there are many different ways you use our product. Let's highlight a few specifics:

  • Window Layouts - We now include 3 different, completely customizable layouts. Need more horizontal real estate? Use the Diagnostic layout to use the full width of the screen to view the most columns possible. Don't like the way a layout looks? Then customize the layout by moving windows around. See our Customizing the User Interface blog for more information.
  • Column Management and Layouts - Besides making it dead simple to find the column customizer, we provide a set of column layouts that you can customize and switch between. Part of the reason for these layouts is to support our new UTC Timestamp feature, explained below. But better yet, there are some troubleshooter layouts to help look at data from the TCP or HTTP layers.

For instance for the TCP Troubleshooter:

[Screenshot: TCP Troubleshooter column layout]

And for the HTTP Troubleshooter:

[Screenshot: HTTP Troubleshooter column layout]

You can also customize any of these layouts to suit any need and save them for later.

  • Color Rules - Again we created a button upfront so our color feature is exposed and simple to access. But we also made it possible to share color rule sets by exporting your color rules. Now you can create a color rule set to highlight a problem and share that along with the trace.

[Screenshot: Color Rules]

Creating a color rule is as easy as creating a display filter. Just right-click on a field in the frame details and select "Add Selected Value as Color Rule."

[Screenshot: "Add Selected Value as Color Rule" menu option]

We’ll have some more information on using Color Rules effectively in our next guest Blog.

  • "Live" Experts - Previously you could only run experts on a saved trace. This limited the usability of experts and many folks complained that they couldn't find the menu item. So now we enable this feature on "Live" captures by creating a snapshot of the data before launching the expert. You’ll just be prompted to save your capture before the Expert runs.
  • Fixed-Width Font - We've heard that you want to view your frame summary information in a fixed-width font. Enabling this feature lines the data up so it is much easier to track with the eye, especially when looking for differences.

Performance Enhancements

In our continued effort to provide faster analysis and processing power, we have provided some new features which enhance your performance and help you get your work done more quickly.

  • Parser Configuration Management - Parser performance has always been a nagging issue. The more complete the parsing, the slower the performance. And switching to a simpler parser set required some acrobatics not easily accessible by mere mortals. Now switching between parser sets is as easy as choosing one from the Parser Profiles drop-down menu.

[Screenshot: Parser Profiles drop-down menu]

And not only is it easy, it's fast! We actually pre-build each of the default parser sets during the install. Furthermore, any parser sets you customize and make your own are also cached. So, you can use the Default or Faster Parsing set to drill down, and once you've narrowed your search down you can quickly switch to a more complete parser set to see the details.

If you have made customizations in the past, you will have to build parser sets for those. But even that process has improved. You can create parser sets based on existing ones. And when you create a new customized parser set, your local parsers directory under "Network Monitor 3" is automatically included.

[Screenshot: creating a new parser set]

  • High Performance Filtering - So you're trying to capture a trace from your tricked-out, gigabit-connected File/SQL/HTTP server. But the traffic is coming in so quickly that you drop frames left and right. Using a High Performance Filter may be the solution for you. Using a limited set of fully qualified filters (like Frame.Ethernet.IPv4.TCP.Port == 8080), you can attempt to filter out more of the incoming traffic before it reaches the disk. This way you avoid the disk load caused by buffering in situations where you are interested in only a fraction of the total traffic. Please review the documentation in our Help file for more details. You can also find some example filters in the Standard Filters under NM34 High Perf Capture. We'll have a blog that talks specifically about this feature in the near future.
  • High-precision Timestamps - In the past our driver didn't use the highest-precision timestamp available. Now, instead of seeing a bunch of frames with the same timestamp, frames appear with more granular timestamps.

[Screenshot: high-precision timestamps in the frame summary]

Other Features

While I'm not going to mention every little feature we've added, here are a couple more of the most notable additions. You can review the release notes in the help directory for a full list and explore the UI to see what else has been updated.

  • Process Tracking in NMCap - By adding the /CaptureProcesses switch to NMCap you can capture process information just like the UI does (see the sketch after this list).
  • UTC Timestamps - One problem in the past with captures is that the timestamp was always based on the time zone where the capture was taken. This made it difficult to compare timestamps with data, for instance Event Logs, whose timestamps are displayed relative to your local time zone. You now have the option to view new traces taken with Network Monitor 3.4 using UTC-relative timestamps. As I mentioned above, the column layout feature is related: when you open a capture we detect the capture file format and pick the appropriate column layout. But if you need to view a new 3.4 trace with the old behavior, you can always manually select the 3.3 column layout.
  • 802.11n & Raw IP Frame Support - Network Monitor now supports monitor mode on 802.11n networks on Microsoft Windows Vista SP1 and later operating systems, as well as Raw IP frames on Microsoft Windows 7. Raw IP interfaces provide traffic from the IP level up, and Network Monitor 3.4 can now see this traffic properly on those types of interfaces.
  • API Updates - We've added support for the new profile sets in the API so you can take advantage of this new feature. You can also create driver-level filters using an offset/pattern match. This performance enhancement can provide even greater capture speeds with less processing overhead. Check out the Help file for full details about the new API.
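To make the process tracking switch above a little more concrete, here is a hedged sketch of an NMCap command line. The adapter wildcard and output file name are placeholders for this example; check NMCap /? or the Help file for the exact behavior on your build.

NMCap /network * /capture /file ProcessTrace.cap /CaptureProcesses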

Join our Connect Community and Download the Beta

Access to the Beta does require that you join our "Network Monitor 3" community on Connect. But don't worry, it's free! And it's really quick if you already have a Windows Live ID. Once you become a member you get access to the latest Betas and occasional newsletters (if you opt in) letting you know what is going on. As a member you can also help us improve the product by filing bug reports for problems you encounter. Hope you enjoy the new version and we look forward to your feedback and reports!


Parser Profiles in Network Monitor 3.4


Parser Profiles are a new feature available in our 3.4 Beta. Rich parsers provide detailed information about every part of a packet. However, this detail comes with a price, as it takes longer to parse and filter frames. Parser Profiles are designed to help in this regard by allowing you to quickly switch between profiles based on your need for speed vs. detail.

Filter with Default, Switch to Windows

The simple graph below shows that the more complex the parser profile, the more detail you get, and the slower parsing is.

[Graph: parser profile detail versus parsing speed]

The advantage of using multiple Parser Profiles is that you can use a faster profile to narrow down your search first. Then, if you need to, you can switch to a more detailed parser set to explore with higher fidelity. Which profile you start with depends on what you need to filter or see at a high level, but here are some general descriptions of each parser profile to help you decide. Each profile described below includes all the options of the profiles mentioned before it.

Pure – The pure profile does essentially no parsing. Its main purpose is to provide some kind of parser if for some reason one doesn’t exist. You can filter on frame numbers and time, and some other things. To find the complete list, you can type “FrameVariable.” in the filter window and look at the Intellisense for all filterable fields. You can also use the ContainsBin plugin, though its performance is not affected by the parser set.

HPC – This is our High Performance Capture Profile and its main purpose is to provide an optimized profile for the High Performance Capturing feature. However, you can also use it when filtering speed needs to be fast. But its filtering capability is limited to TCP and UDP protocols and below.

Faster Parsing – This profile adds some more protocols into the mix like ARP, HTTP, and some of the name resolution protocols for instance DNS and NBTNS. But it leaves out some heavier protocols like SMB and SMB2.

Default – This profile includes SMB and SMB2 as well as RPC. It’s fairly well rounded and will probably be enough parsing for most general cases. However, it does not parse into the application layer, so RPC and SOAP-based protocols display as stubs only.

Windows – This parser profile contains every Windows-based protocol plus the SQL TDS protocol. The parsing is incredibly complete and will show most application layer protocols. But it is also the heavyweight in terms of cost of parsing.

There are even more parser sets available from our CodePlex Parser site as well as directly from the Office team. But as you might have guessed, using these parser profiles will slow down parsing and filtering even beyond that of the Windows set, as they have dependencies on the Windows parser profile.

Parser Customizations

In some cases you might want to modify a parser or add a new parser. This procedure has changed a bit from NM3.3. To make a parser change, you have to create a new parser set. The easiest way to do this is to create a new parser set in the Parser Profile Options window and use a current parser profile as the starting point.

As an example, let’s pretend we’ve made a modification to TCP.NPL. If you are making the change using the Network Monitor parser window, you’ll get an error message when you try to save your change. This message states that you cannot save the parser in the default location because it is protected. This is intentional because we want you to have a copy of the original. But you can hit Yes in the dialog to save to a different location.

[Screenshot: prompt to save the parser to a different location]

You are now prompted to save the file to your local parser directory. While you can choose another location, the “%HOMEPATH%\documents\Network Monitor 3\Parsers” folder works well as this is automatically added for you when you create a new parser profile.

Now that the file is saved, open the Parser Profile Options dialog by clicking the Parser Profile drop-down button and selecting “Parser Profile Options…”. Next select a profile you wish to use as a base. Normally this would be Default or Windows, but it depends on the scenario and depth of parsing you’d like to have, as we discussed previously.

You can override the name and description to make it more meaningful to you. If you didn’t save the parser file to the default location, you can add that new directory and move it to the top of the list. But as you can see, your “Network Monitor 3\Parsers” directory has been added by default.

[Screenshot: Parser Profile Options dialog]

Once you hit OK and exit the Options dialog, you can select your new profile from the Parser Profile drop-down button under User Defined Profiles.

[Screenshot: Parser Profile drop-down showing User Defined Profiles]

The first time you select the profile, it will need to build the parser profile set, but afterwards, the prebuilt binary will be loaded quickly.

Pick the Right Parser Profile for the Job

With Network Monitor 3.4 you now have even greater flexibility to choose the parser profile that provides the best performance for the task at hand. And since each of the built-in profiles is built during the install, they are all quickly available with a few simple clicks.

Reducing Dropped Frames with Network Monitor 3.4


 

by Darren J. Fisher – Network Monitor Development Lead

Capturing network traffic is actually a very stressful task for most computers. With modern networks, traffic can arrive at a system at astounding rates. Most machines built these days have at least 1 Gbps network interfaces. When connected to a network of equal or faster speed, if the traffic consumes just ¼ of the capacity of this interface for 10 seconds, the system will have processed 298 MB of data. Under heavy loads, the interface can reach ¾ or higher utilization of its bandwidth; that’s approaching 1 GB of data in just 10 seconds!

Usually when a computer is receiving that much data, it is also doing other processing on it. Things like decoding a video stream, saving a file, or rendering an image. These types of operations by themselves would test the performance of typical computers. Let’s not forget the other things your computer is doing like scanning for viruses, drawing your desktop, running your gadgets, etc. Now add the cost of capturing every bit of the network traffic and saving it to the disk. It becomes easy to see how we might lose a packet here or there.

The latest version of Network Monitor provides very accurate statistics on how many frames it dropped during a capture. With Network Monitor, frames are typically dropped in one of three places:

  • Network Interface: Frames dropped here are a result of traffic arriving too fast for your network hardware to decode and send it to the operating system. Dropping frames here is actually pretty rare. Most network interfaces live up to their speed limits and with modern network infrastructures your connection will rarely reach those limits.
  • Network Monitor Capture Driver: Without going into too many details, Network Monitor uses a kernel mode driver that monitors each network interface. When capturing, it makes a copy of every frame it sees and tries to place them into a memory buffer so that the Network Monitor UI, NMCap, or your API program can process them. There are a finite number of buffers. If they are all full, we must discard frames until an empty one is available.
  • Network Monitor Capture Engine: This component of Network Monitor receives frames from the Network Monitor Capture Driver and attempts to save them to a temporary file on the local disk drive. If you capture long enough, you will reach the storage quota on either your operating system or the one configured in Network Monitor. When this happens, the engine must discard the frames as there is no space to save them.

Dropping frames is an unfortunate reality of capturing network traffic. It is close to impossible to capture 100% of traffic in 100% of capture scenarios. Even with so-called ‘typical’ scenarios, traffic rarely has a steady flow. There are always dips and spikes which apply sudden pressure like a rogue wave. When that happens, Network Monitor may be able to deal with it depending on how it is configured.

Typical Frame Dropping Scenarios

Let’s take a closer look at a few scenarios that typically result in dropped frames, why Network Monitor drops in these scenarios, and what you can do to deal with them.

Using a High Performance Capture Filter

In this scenario, most of the traffic that arrives at the computer is noise. So the user applies a capture filter. The desired outcome is a smaller trace containing only the frames that are of interest. We also want to avoid saving frames that we will just throw away later.

Why are frames dropped in this scenario?

A good analogy for understanding why is filling a pool with a water hose. The rate at which the water comes out of the hose remains constant. Unobstructed, every drop of water will flow into the pool until the pool is full. But if you add a filter between the hose and the pool, what happens? The filter slows down the flow of water into the pool but not from the hose. Depending on the thickness of the filter, the pressure in the hose will build up quickly or slowly to the point where the hose will leak.

The pool represents your disk, the hose represents the network interface, and the thickness of the filter represents the complexity of a capture filter expression.

In order to identify the frames which pass the filter condition, Network Monitor must evaluate each and every frame it sees. Only when a frame passes the condition will it be saved in the trace. This requires a third component of Network Monitor to enter the mix, the Parsing Engine.

Parsing a frame is a very slow operation. To put this into perspective, for a batch of 500 frames on a fast machine, it can take a minimum of 15 times longer to evaluate a filter on those frames than it would take to simply save them to the disk. This time factor can grow if the filter expression is complex and/or a larger set of parsers is used (thicker water filter).

Remember above we talked about the driver using memory buffers to store frames that it saw at the network interface? Well, those buffers are emptied by saving the frames they contain to disk or applying a filter to them first. Since the number of buffers is finite, if we do not empty them faster than they are being filled, we will drop frames. Going back to the analogy, the memory buffers represent the amount of pressure the hose can withstand. The hose will leak unless we remove the filter; it is not a matter of ‘if’ but ‘when’. How long it takes to leak depends on the thickness of the filter and the pressure limit of the hose.

The same thing happens with a high performance capture filter. If the filter is in place and the traffic is steady at a moderate rate, the driver will eventually run out of memory buffers to fill. When this happens, we drop frames.

How can this be addressed?

In this scenario, there is little we can do to control the rate at which traffic arrives at the computer. We also may not be able to do much about how complex the filter is. We can however increase the pressure limit of the hose. We do this by increasing the number of memory buffers that are available to the driver. This is done by modifying a registry key:

[Screenshot: AdapterBufferCount registry value]

AdapterBufferCount can be set to any value between 4 and 128. The bigger the number, the more memory that is available for buffering packets. The actual amount of memory allocated is the value for this key multiplied by 512KB. The default setting uses 8 MB.

Conservation should be the rule when changing this value. The memory which is allocated as a result of this value is supplied by physical memory. For example, if your system has 1 GB of RAM installed and you use a value of 128, that will cause 64MB of RAM to be used which is 6% of your total RAM. That may not seem like much until you add the RAM consumed by other OS components. Less physical RAM leads to increased page faults (swapping) which can impair the overall performance of your system while capturing.

This value is also per network interface enabled in Network Monitor. On typical Win7 machines, there are usually 3 interfaces enabled by default. This means that with a setting of 128, you can potentially consume 192 MB of physical RAM.
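As a hedged sketch of what the change looks like from a command prompt: <NetmonDriverParametersKey> below is a placeholder standing in for the registry key shown in the screenshot above, so substitute the actual path from your system before running anything. A value of 32, for example, would allocate 32 x 512 KB = 16 MB of buffer memory per enabled network interface.

reg add "<NetmonDriverParametersKey>" /v AdapterBufferCount /t REG_DWORD /d 32 /f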

Additional techniques

The High Performance filtering feature has two additional levers that will allow it to yield (remove the water filter) temporarily so the capture driver can catch up. It does this by caching the frames to the disk and performing the filter operation on the cache instead.

Note: This has the drawback of using more disk space and can actually create an even more insidious situation, as explained below.

The first setting instructs the feature to turn itself off if the capture driver indicates that there are only a few buffers remaining. The setting is a registry key, DeferOnBufferCount.

[Screenshot: DeferOnBufferCount registry value]

When the number of full buffers is equal to or greater than this value, the High Performance filtering feature temporarily turns off. Instead of filtering frames as they arrive at the driver, they are placed in a cache and filtered later. We mentioned earlier that saving frames can be up to 15 times faster than filtering them. The time saved will allow the capture engine to process buffers faster. This in turn will make free buffers available to the driver faster. The end result is not getting into a state where dropping frames is imminent.

The second setting instructs the feature to turn itself back on when the number of full buffers is low enough to safely resume filtering as frames arrive. This is also configured with a registry key, DeferOffBufferCount.

[Screenshot: DeferOffBufferCount registry value]

“Defer” in the names of both of these keys refers to filtering the cache. Once triggered by the first setting, Network Monitor will continue to cache frames until the number of full buffers is equal to or less than the value of this key. When that happens, filtering of frames as they arrive will resume. This cycle will repeat as necessary based on the buffer conditions. DeferOnBufferCount has the highest precedence: regardless of the off setting, if the number of buffers consumed is greater than or equal to DeferOnBufferCount, no filtering on the cache will take place.

Together, these settings allow Network Monitor to react to spikes in normal traffic flow. The value for DeferOnBufferCount should be set to not trigger deferring under normal capturing conditions. The value for DeferOffBufferCount should be set low enough so that Network Monitor does not turn off deferring during the spike.

For example, under normal traffic conditions, the capture driver has 2 buffers full on average. During a spike, as many as 10 buffers fill up. The total number of buffers is 16. Under these conditions, a good setting for DeferOnBufferCount would be 6. That will not trigger deferring if a small spike occurs where 3 or 4 buffers become full, but for a large spike it will be triggered. DeferOffBufferCount in this scenario should be set to 3. Setting this too low may keep deferring on indefinitely. Too high and oscillation will occur, where the feature turns on and off repeatedly, which is not desirable.
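Using the example numbers above, a hedged sketch of the corresponding registry changes might look like this. Again, <NetmonDriverParametersKey> is only a placeholder for the key shown in the screenshots and should be replaced with the real path from your system.

reg add "<NetmonDriverParametersKey>" /v DeferOnBufferCount /t REG_DWORD /d 6 /f
reg add "<NetmonDriverParametersKey>" /v DeferOffBufferCount /t REG_DWORD /d 3 /f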

Using a Normal (or no) Capture Filter

If a capture filter is not a High Performance filter, it is a normal filter. This type of filter is always applied to the frame cache. Filling the cache and processing of the cache occur simultaneously in separate threads; thus making the cache a shared resource. We will call the thread that processes the cache a proxy. The proxy and the capture engine effectively fight over who can access the cache.

Why are frames dropped in this scenario?

We like analogies so here is one for this; let’s revisit our pool but this time we want to drain the pool too. Our pool is special; we cannot fill the pool and drain it at the same time. If we are filling it, we must close the drain. If we are draining it, we have to block the hose. If the water is still on while we are draining, pressure will build up in the hose. If we drain it too long while the water is still on, our hose will leak because of the pressure build up. If we turn off the water however, we can drain as long as we want until the pool is empty.

Ideally we do not want the hose to leak. So as long as the drain is closed, we are fine. But we need to open the drain occasionally otherwise our pool will overflow. The trick is to keep the drain open long enough to keep the pool from overflowing but short enough to not make the hose start leaking. Our hose can withstand some pressure but not for long.

In this example, the pool represents the cache of frames. The hose represents the capture engine which fills the cache. The water is the incoming traffic from the capture driver. The drain represents the thread which processes the cache. The pressure in the hose represents the memory buffers that the capture driver uses. You can guess what leaking represents: dropped frames.

Coming back to Network Monitor, we cannot afford to have the capture engine lose the fight over the cache for too long. We need to process frames in the cache while the cache is being filled however. The proxy must periodically lock the cache, remove frames from it, and then relinquish it. Frames in the cache are pending frames. Once the proxy has a frame, it can apply a filter to it or if there is no filter, display it (Network Monitor UI only).

Note: NMCap with no filter does not use a proxy; the cache is the target capture file.

Displaying a frame and applying a filter take time. While the proxy is doing this, the cache is not locked allowing the capture engine to fill it. Once the proxy is done with one frame, it gets another one from the cache, requiring it to be locked again for some period of time.

Going back to our analogy, if these periods of time are long, the hose will experience short rises in pressure spread over long periods of time. This is ideal because the water does not really flow at a constant rate; it dips and spikes over time (like network traffic). When it dips, the pressure is relieved.

However, if the periods of time are short, the rises in pressure will be closer together causing the pressure to build up faster. If it gets too high before the dips come, it will reach the pressure limit and leak.

This is what happens if the proxy’s processing time is too short. It will access the cache at a faster rate and starve the capture engine (proxy wins the fight too often). Starving the capture engine will cause it to slow down emptying buffers since the engine cannot empty them while the cache is locked. If the buffers are not being emptied fast enough, the capture driver will fill them all and when that happens, it will drop frames.

The ways you can cause the proxy to speed up are:

How can this be addressed?

The proxy can also be considered an engine. Sometimes engines in cars are too powerful so the manufacturer installs a governor. The job of the governor is to keep the engine from going too fast so the drivers do not kill themselves. Our proxy also has a governor to keep it from causing dropped frames. This is actually a registry key, MaxPendingFramesPerSecond.

[Screenshot: MaxPendingFramesPerSecond registry value]

This controls how many frames per second the proxy can process while capturing; resulting in cache locks per second. The default value for this setting (0xFFFFFFFF) effectively turns the governor off (unless you have a supercomputer). If you pause or stop capturing, the governor is not applied.

If you see dropped frames in one of these scenarios, set this value to a moderate number, such as 1000. If frame dropping continues, try setting lower values until the dropping stops. You can also start with a low value and increase it steadily until you see drops, then lower it slightly. The value that will work best for you depends on the processing power of your computer.
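For instance, a hedged sketch of starting with the moderate value suggested above (with <NetmonRegistryKey> standing in for the key shown in the screenshot, to be replaced with the real path from your system) would be:

reg add "<NetmonRegistryKey>" /v MaxPendingFramesPerSecond /t REG_DWORD /d 1000 /f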

Increasing the setting for AdapterBufferCount may offer some relief but it is usually temporary. If your capture is limited to a small, fixed period of time (i.e. < 5 minutes), this method may be a reliable alternative.

Conclusion

As you can see, there are multiple factors that can affect the ability of Network Monitor to capture every frame that arrives at the computer. There are also multiple settings exposed that can be used to fine tune your experience. While there is no silver bullet for perfect capture performance, many scenarios can be dealt with by applying the techniques described above. Being able to recognize why frames are being dropped is an important first step in finding the best solution.

The solutions above can be considered ‘high-tech’ solutions. There are also some very simple, low-tech steps that you can take which may also yield the results you are looking for.

Eliminate Disk Contention by Saving to Another Disk Drive

[Screenshot: capture file location setting under Tools->Options]

Network Monitor exposes a setting for where the frame cache is stored, via Tools->Options on the main menu. If the default location is also a heavily used disk drive (i.e. the system disk), try a different drive if you have one.

Reduce the Processing Load on the Computer

This is obvious but often easier said than done. Try to have as few processes running as possible while capturing live traffic. Avoid unnecessary disk access. Try to have plenty of physical memory free to avoid virtual memory paging.

Use NMCap to Capture When Possible

NMCap provides the same capture features as the Network Monitor GUI application and it offers higher performance. If possible, use NMCap to capture traffic instead. The GUI application can be used to load and analyze the resulting capture file.
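As a minimal sketch, a plain NMCap capture with no filter might look like the following; the adapter wildcard and file name are placeholders for this example, and the resulting file can then be opened in the GUI for analysis.

NMCap /network * /capture /file FullTrace.cap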

Use ‘weaker’ Filters

Everyone knows the joke where the guy says, “Doc, it hurts when I move my arm like this.” The doctor replies, “Well, don’t do that.” In our world the joke would go, “Microsoft, Network Monitor drops frames when I use this filter.” To which we may reply, “Well, use a different filter.”

Occasionally, Network Monitor users, especially network protocol gurus, will create capture filter expressions that attempt to be a bit too precise. Creating such a filter will yield the fewest and most desired frames, but it often comes at a cost of complexity which can lead to dropped frames.

A not so obvious remedy is to use a less complex filter. This may result in more frames captured but may also result in fewer frames dropped. The ‘Display Filter’ functionality in the Network Monitor GUI can be used to see the exact frames desired with a more complex filter once the capture file is opened.

Network Monitor 3.4 has Released!


I’m proud to announce the release of Network Monitor 3.4 to the Microsoft Download Center. We’ve included a bunch of exciting new features and updates. A new high performance capturing feature allows you to capture on faster networks without dropping frames. Parser profiles provide a simple way to increase filtering/parsing speed and allow you to switch quickly between various parser sets. And UI updates like Color Rules, Window Layouts and Column Management give you flexibility to do cool customizations to help you work the way you want. Please visit our Beta announcement to get a rundown of the new features.

As always, you can get support on our Network Monitor forum. There the community and our team can help answer questions about the UI, NMCap, API, parsers and even assist with troubleshooting scenarios.

With 3.4 complete we are now setting our sights on a new version with some grand goals for ITPros and Developers, meant to take protocol troubleshooting and development to the next level. And as always our parser development continues to evolve, so visit our CodePlex parser site frequently to get the latest parsers as well as standard and color filter updates. In fact, we’ve had some recent updates to the standard filters, so the latest CodePlex parser build, 2351, has some updates that are even newer than the 3.4 release. Stay tuned and enjoy Network Monitor 3.4!

Blog Makeover: Network Monitor Landing Page


We’ve redesigned the Network Monitor blog to make it easier to find resources that have accumulated over the years. The FAQ on the TechNet wiki is a user-editable resource, so feel free to extend it to include any frequently asked questions other users might benefit from. Another wiki resource is the Common Fields used for Filtering link, which can help you see what is available in terms of filtering for a given protocol. Again, feel free to extend and contribute to the community. In addition, many of the learning blogs have been organized into one place, along with support and video resources. So take some time to explore the new menu bar and learn more about Network Monitor 3.4.

Using Color Rules to Show Direction


By Jin Feng

Differentiating client requests and server responses can provide a clear-cut view and make it easier to understand what’s going on within a trace. Normally, with a flat trace, it can be hard to distinguish one packet from another. However, Network Monitor color rules let us highlight frames matching a rule in a specified color and text style, which does this job very well. For more information on creating color rules, you can reference this blog. Let’s drill down to see how it works.

Finding the Right Direction Filter

The most important thing is to identify the correct filter expressions and then use them to apply the color rules. We can find “Direction” information in different protocols at different protocol layers, but ultimately this information comes from the Network Layer. We have already defined a property, “NetworkDirection”, in Network Layer protocols like IPv4, IPv6 and IPX. For example, in IPv4 it is defined as:

[ Property.NetworkDirection = (SourceAddress > DestinationAddress) ? 1 : (SourceAddress == DestinationAddress) ? 0 : 2 ]

Note: Properties are defined in the parser files and are set as a frame is parsed. You can consider them as meta data and they have multiple uses. See this Wiki Reference which describes some useful properties and provides a more complete definition. The example code can be found in the ipv4.npl file (installed by default in C:\ProgramData\Microsoft\Network Monitor 3\NPL\NetworkMonitor Parsers\Base\).

So a value of 1 represents one direction and 2 represents the reverse one. The actual direction is arbitrary based on which address is the larger one numerically. Thus for most traces we can define different color rules with the following two filter expressions to show direction:

  • Property.NetworkDirection == 1
  • Property.NetworkDirection == 2

Identifying Local Traffic

You might ask about “Property.NetworkDirection == 0”: is that possible? The answer is yes, if you configure your computer to capture traffic from itself. In this case the above filter expressions won’t work anymore, and we need more information from the Transport Layer, for example TCP. There is already a property “TCPDirection” defined in TCP.npl:

[
Property.TcpDirection = (Property.SourceNetworkAddress > Property.DestinationNetworkAddress) ? 1
    : (Property.SourceNetworkAddress < Property.DestinationNetworkAddress) ? -1
    : (Property.SourcePort > Property.DestinationPort) ? 1
    : (Property.SourcePort < Property.DestinationPort) ? -1 : 0
]

Using the following two filter expressions will show direction correctly even when the src/dst network addresses are the same.

  • Property.NetworkDirection == 1 || ( Property.NetworkDirection == 0 && Property.TcpDirection == 1 )
  • Property.NetworkDirection == 2 || ( Property.NetworkDirection == 0 && Property.TcpDirection == -1 )

Note that this works for TCP traffic only. But as we now have a property for UDPDirection, the same color filters could be created for UDP traffic.
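Assuming Property.UDPDirection follows the same 1/-1 convention as TcpDirection above (the exact property name and values are an assumption here and worth verifying against udp.npl), the analogous UDP color rule filters would look something like this:

  • Property.NetworkDirection == 1 || ( Property.NetworkDirection == 0 && Property.UDPDirection == 1 )
  • Property.NetworkDirection == 2 || ( Property.NetworkDirection == 0 && Property.UDPDirection == -1 )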

Broadcast Traffic

Another possibility is broadcast traffic. The “direction” is defined differently in this case. We can identify a filter expression for this traffic and apply a specific color rule for it ahead of the previous direction filters so that broadcasts have a higher priority. Keep in mind that one frame can only have one color rule applied. Color rules are applied in order; once a color rule matches, the following ones will be ignored. The filter expression for broadcast would be:

Ethernet.DestinationAddress == FF-FF-FF-FF-FF-FF ||
Property.WiFiDestination == FF-FF-FF-FF-FF-FF ||
IPv4.DestinationAddress == 255.255.255.255 ||
IPX.DestAddress == 0xFFFFFFFF

Color Rules from CodePlex

Color rules, along with standard filters and parsers, are included as part of each CodePlex parser release. We’ve included this filter with the latest parsers from CodePlex. You can enable the Network Direction color rule from the new Color Rules button in Network Monitor 3.4.

[Screenshot: Color Rules button]

When the Color Rules window opens, select the Open button drop down and you’ll find Network Direction in the Default Sets folder.

[Screenshot: Color Rules Open drop-down with the Default Sets folder]

These rules will be added to the top or bottom of the list depending on the checkbox setting for appending new rules.

[Screenshot: "Always append new rules" checkbox]

Color Filters for Clarity

So here's an example of broadcast traffic represented with a pinkish background, and traffic in one direction represented by a yellow background. In this example, NetworkDirection==2 was not changed.

[Screenshot: trace with broadcast and direction color rules applied]

Using color filters to make traffic direction stand out is a great way to help understand traffic flow. Hopefully this is a feature you find useful and learn to incorporate into your troubleshooting techniques.

Using High Performance Filtering


There are certain scenarios where the High Performance Filtering feature added in Netmon 3.4 will provide the best performance for capturing with a filter. The idea is to filter frames before they hit the disk which can improve your performance by reducing the impact on the capturing machine. High Performance filtering can be performed with both the UI and NMCap. However, if performance is a concern, NMCap provides the best capturing performance overall. The following provides some guidance on when and how to use this feature. You should also consider reading our Avoiding Dropped Frames blog as background.

This article is split into 3 general areas. The first few sections describe what High Performance mode is. The sections after that describe when to use the feature. And finally, the last few sections describe how to use the feature in more detail.

Capture Filter Overview

First let’s review the various capture filtering methods:

  • Normal Buffered Capturing (Default Mode)
    Description: Frames are buffered to disk first and then evaluated afterwards. If your machine is unable to keep up with incoming traffic, your pending frame count will continue to grow. But because frames are buffered first, this mode is much better at handling bursts of network traffic.
    Usage: Any capture filter with at least one data field or property that is NOT fully qualified.

  • High Performance Profile
    Description: An optimized parser set which provides better performance, but with less depth in terms of fields you can parse on. This profile is automatically used with High Performance Capture filters, but can also be used with normal buffered capturing.
    Usage: Enable this parser set using the /UseProfile switch with NMCap or by selecting the profile in the UI. You should also turn off conversations for best performance; with NMCap this means adding the /DisableConversation switch.*

  • High Performance Capturing
    Description: Frames are evaluated before buffering as long as we detect we can keep up. This saves a write to the disk and uses a high performance filter to evaluate the frames.
    Usage: Any capture filter where ALL fields are fully qualified and supported. See High Performance Capturing in the help file for a full list. Also reference the NM34 High Perf Filters in the standard filter list under the Load Filter drop-down button.

*Note: In general you can increase the performance of NMCap by disabling conversations and not enabling process tracking. Conversations are enabled by default when any filter is used, even in StopWhen/StartWhen filters. If you can verify conversations are not really needed, then explicitly add the /DisableConversation switch. Conversations are not needed for TCP, IP or Ethernet data fields. Process tracking is disabled by default.

General Limitations of Capturing

There are 3 external factors which contribute to your machine's ability to keep up with and filter incoming network traffic.

  • Network – Obviously the greater the load/speed of your network the greater the chance that you have of dropping frames or filling up your driver with buffered frames. Also the type of traffic can affect your ability to keep up with incoming packets.
  • CPU - The speed and load on your CPU affect the ability to filter traffic. The faster your CPU, the more packets you can parse per second.
  • Disk – In the case where frames are being buffered and then written to disk, the load and speed of your disk can impact your ability to keep up with incoming packets.

What is High Performance Capturing

High Performance filtering is composed of two separate optimizations.

  1. With specific fully qualified fields, like Frame.Ethernet.IPv4.TCP.Port, packets can be filtered out before they are buffered to disk. This avoids a disk write and can lower CPU/disk load. However, if we detect we can't keep up, we revert to the buffering behavior. A type of "flow control" is used to control when we revert and when we can continue filtering before we buffer.
  2. Second, we include a high performance parser profile which is automatically used when High Performance filters are used. While this parser set is limited in detail, it is able to parse more frames per second than the Default or Windows Profile.

When to Use High Performance Filtering

The range of system capturing performance, network throughput and network burst creates a huge matrix of possibilities. Rather than trying to provide guidance in terms of the types of machines and networks in which this feature will benefit, we instead provide a list of conditions to explain when and when not to use the various components of the feature.

For the most part, this feature is targeted at cases where you have a busy server with high traffic and you want to capture a narrow slice of the total network traffic. In cases where we can avoid buffering to disk, we lower the disk load. And in the case where we use the High Performance Profile we lower the load on the CPU.

When NOT to use High Performance Filtering

If you are capturing a majority of the traffic you get the best performance using NMCap with no filter. That includes StopWhen or StartWhen filters as these will cause the parsing engine to engage. By writing directly to disk we don’t have to evaluate frames and therefore we don’t have to read and write the capture after a filter evaluation.

When to Use High Performance Filtering and NMCap

When using High Performance Filtering you should meet all of the following conditions. The main difference in this scenario when compared to the next is that we cannot handle network bursts as reliably. When filtering before we buffer to disk, all buffering is done in memory and this is a limited resource when compared to your disk space. For more information on configuring this buffer please see the Avoiding Dropped Frames blog.

  • CPU Can Keep Up with Traffic - The ability to keep up with incoming frames depends on the pace at which frames are arriving and the ability for your CPU(s) to keep up with that traffic. The complexity of the filter also affects how well a CPU can keep up. See the “How To Optimize Your Filter” section below.
  • Filtering a Large Portion of the Traffic - The more you filter out, the less you have to write to disk. By skipping the disk buffer path, you can gain more performance. The greater number of frames you can filter out, the less work your system has to do.
  • Supported Filter – There is a subset of fields that are supported, mainly address and ports. So your filter has to be supported in high performance filtering. To see a complete list, please see the “High Performance Capturing” section in the help.
  • /Startwhen or /Stopwhen NOT Required - These triggering options are still evaluated after the frames are buffered. If you set a capture filter which blocks the triggering conditions, they will never be hit. You can mitigate this problem by including the trigger filter in your capture filter. It makes the most sense to filter as narrowly as possible so that as few extraneous frames as possible are let through.
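Putting these conditions together, here is a hedged sketch of a high performance capture with NMCap. The port and file name are placeholders; the filter uses only fully qualified fields so it can be evaluated in high performance mode, and /DisableConversation is added because TCP fields do not require conversations.

NMCap /network * /capture "Frame.Ethernet.IPv4.TCP.Port == 8080" /file WebTraffic.cap /DisableConversation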

When to Use High Performance Profile Only and NMCap

You can also separately specify the High Performance Profile when running NMCap with the /UseProfile switch. Using this high performance parser profile alone, you are able to get better capture performance than with the default configuration.

  • CPU CANNOT Keep Up with Traffic Bursts - A loaded CPU or a network burst can affect the ability to keep up with traffic. In this case it might be better to buffer packets to make sure we don't drop any. By using the High Performance Profile explicitly you get maximum parser performance. See the “How to Optimize Your Filter” section below. An indication that your CPU can’t keep up is a constant increase in the pending frames counter.
  • Filter Supported by High Performance Filter Profile - In order to optimize the High Performance filter profile, we removed many higher level parsers. The filter will simply fail if it is not supported. However, because we restrict the fields in the standard High Performance Filtering, there is greater flexibility in what filters you can specify by using the profile alone. For instance, you can filter on TCP.Flags.Reset using the High Performance Profile but you cannot use this as a High Performance Filter.

When to Use High Performance Filtering in the UI

When you are using the UI and want to view traffic live, you can use a fully qualified filter to restrict the amount of traffic you are looking at. But you might still want to use the full parsing to view the data. For this scenario here are the conditions which would warrant the High Performance feature.

  • Not Concerned with Impacting the Server - The UI has a larger footprint in terms of CPU and memory load. If you are concerned at all with impacting the capturing machine, you should use NMCap instead.
  • Not a Long Term Capture Session - Related to the previous condition, capturing in the UI for long periods of time might consume so many resources that the UI becomes unresponsive and you might not be able to save the resulting trace. As the UI collects frames forever, there is no way to release any resources related to the incoming trace information.

How to Optimize Your Filter (Weaker Filters)

As mentioned in the Avoiding Dropped Frames blog, the filter complexity affects the CPU load, and one possible solution is to use “weaker” filters. The type of filter you use affects your filtering speed. The more complex and deep a filter is, the longer it takes to parse a frame. For instance, parsing all the way to TCP.Port takes a lot longer than parsing to Ethernet.Address. But first let’s determine how to tell if you need to optimize your filter.

The first step is determining if your machine can keep up based on a specific filter. The only way to know for sure is to run NMCap using a high performance filter during the type of network traffic you expect to see. Then view the statistics that are printed out and view the Dropped and Pending frame counters.

  • If the Pending count continues to grow over time, then this is an indication your CPU cannot evaluate the filter quickly enough to keep up with the incoming traffic. It might also be an indication that the default settings for flow control are not ideal for your machine.
  • If the Drop count rises, then this is an indication that either your CPU can’t keep up or that the High Performance mode is unable to detect the inability to keep up and automatically switch back to buffering the packets to disk.

For either of these cases, it might be possible to optimize the buffers and flow control to avoid dropping frames or make sure you stay in high performance mode rather than reverting back to buffering. See the “Modifying the High Performance Buffer Options” below for more details.

Picking a Simpler Filter

In some cases you can provide a simpler filter to achieve what you need. For instance, if you are filtering on an IPv4 or IPv6 address, it might be possible to use an Ethernet address instead. Ethernet is simpler to parse and therefore faster.

Also keep in mind that it does take longer to evaluate multiple fields than just one. While you might not have an option to simplify in this manner, this might be a place you can consider simplifying. For instance, rather than looking for both the source and destination addresses you might be able to get away with looking for only the destination. Since the source is often your machine, and traffic to the capturing machine assumes your address, it would be better to use Frame.Ethernet.DestinationAddress rather than Frame.Ethernet.Address.
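To make this concrete, here is a hedged comparison of a deeper filter with a weaker Ethernet-level alternative, as discussed above. The addresses are placeholders, and the Ethernet form follows the dash-separated style used elsewhere in this post.

  • Deeper filter (must parse Ethernet and IPv4 for every frame): Frame.Ethernet.IPv4.DestinationAddress == 10.0.0.5
  • Weaker filter (stops at the Ethernet layer, so each frame is cheaper to evaluate): Frame.Ethernet.DestinationAddress == 00-15-5D-00-00-05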

Using Blob to Filter a Pattern Offset Match

If there is a filter that is not supported using High Performance mode, or you need even higher performance, you can use a pattern offset filter. The caveat here is that you have to validate if your pattern and offset is accurate for the scenario you are capturing. For instance, if you are filtering on a TCP port with the ESP protocol in the stack, there might be situations where IP options move the location and block traffic that you'd normally capture with a normal filter. You must evaluate the possibility that you might drop frames you'd normally capture and understand the impact of missing these frames. If ANY of these following conditions are true, then you might consider this option.

  • Filtering Speed Still too Slow - A simple pattern offset can improve performance by an order of magnitude or two.
  • High Performance Filter not Supported - In some cases, the high performance filter doesn't include a protocol, for instance ESP. In that case, it might be possible to rely on a pattern offset filter to perform high performance filtering for these cases.

To create a filter using the Blob, you need to know the offset and length of the pattern you are matching. Often, the simplest way to do this is to open a trace you’ve taken from the network you are interested in, and click on the field in question. Then look in the hex details for that location and offset.

[Screenshot: Hex Details pane showing the field offset]

For instance, if you wanted the IPv4 Destination Address, you would see that on an Ethernet network this is at offset 30 and the size is 4 octets. So the filter would be Blob(FrameData, 30, 4) == X. For X you would have to determine the 4-octet value that represents your IP address. Again, if you already have a trace, looking at the hex details on an example frame is one easy way to find out what that value is. But if need be, you’ll have to translate the value to hex. If in this example the address was 1.2.3.4, the associated hex value is 0x01020304. So your blob filter would be:

Blob(FrameData, 30, 4) == 0x01020304

With most high performance filters you must preface them with a fully qualified path. But Blob is the one exception. If it is used as a capture filter in the UI or NMCap, and the rest of the filter string qualifies, then the filter will be attempted using the high performance mode.
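As another worked example, assuming plain Ethernet framing and a 20-byte IPv4 header with no options (the same caveat discussed above), the TCP destination port starts at offset 14 + 20 + 2 = 36 and is 2 octets long, so TCP port 8080 (0x1F90 in hex) could be matched with:

Blob(FrameData, 36, 2) == 0x1F90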

High Performance Filtering to the Rescue

As networks continue to get faster, capturing that traffic becomes more and more challenging. With this new feature you have the ability to filter before you buffer to disk thus giving you the flexibility to control the load on your system. This can give you the ability to capture faster traffic with a lower tax on the capturing system.

Trouble Accessing Some Fields with API


With the Network Monitor API you can access any field by adding its path and then accessing the offset, size or value using one of the Field Value Functions like NmGetFieldOffsetAndSize or NmGetFieldValueString. But for certain paths this does not work properly. In this blog we’ll discuss how to work around this problem.

Adding Fields to the Parser

To review, let’s first discuss how you access fields with the API. To provide an alternate high level view, let’s look at this diagram:

[Diagram: building a Frame Parser and applying it to a Raw Frame]

The green box at the end is our goal; at this point we can see the values associated with a data field for a specific frame. But to get to this point, we need to create a Frame Parser and apply it to a Raw Frame. The orange box represents different ways you can produce a Raw Frame handle. The Blue box describes the main steps needed to create a Frame Parser.

During this process you create a Parser Configuration handle which can optionally be optimized to look at specific fields, filters, and properties (white box). This optimization can be overridden, but for fastest parsing you’ll want to pass NmFrameParserOptimizeOption.NmParserOptimizeFull as the last parameter to NmCreateFrameParser (purple box). In either case, a Frame Parser can be used to break apart a Raw Frame object.

And now with a Raw Frame and a Frame Parser we can access the data in a given frame. We can also evaluate filters against the frame and access properties from the frame parser. As another reference, this blog has more details about the API. Also, the examples in the help file, in particular “Iterating Frame Field Data”, are a good reference for this article.

With all this talk of Handles, it’s good to bring up that each should be closed after you use them by calling NmCloseHandle. A clue that something has gone wrong in this regard is that you might get errors after iterating 1000 frames, as this is the default number of open raw and parsed frame handles we allow.

Determining the Path to Add to NmAddField

To access an IPv4 source address, you would add “IPv4.SourceAddress” as a data field argument to NmAddField. In some cases the data field you need to add isn’t so obvious. When you have a problem discovering the path you can use what is returned from right-clicking in the UI and selecting “Add Selected Value to Display Filter”. And most of the time this works great.

The Problem

But in some cases, the path returned by right-clicking does not work when trying to call one of the NmGetField type calls or NmGetParsedFieldInfo. What you’ll find is that NmAddField works successfully, and the calls return without an error. But the results, such as the field’s offset, size and value, will be zero. Or in the case of NmGetParsedFieldInfo, the data structure returned is populated with zeros.

As it turns out, the culprit here is how the parser code (NPL) is defined. In almost every case in our parsers the instantiated object has the same name as the data type. So in our parsers we could have a protocol defined as follows:

Protocol ProtX 
{
UINT32 yyy;
}

And then at some point it can be instantiated in another protocol or structure, for instance:

ProtX ProtX;

In this case, adding the field using the protocol name followed by the field name works perfectly. So when you call NmAddField, the following statement can be accessed just fine.

NmAddField(myFrameParser, “ProtX.yyy”, &myID);

However, there are a few instances in our parser code where this convention is not followed. The instance that prompted me to write this blog was in the ETL parsers for NDIS events. Instead we did something like this:

ProtX myProtX;

And while this is perfectly legal, it confuses our API and causes it not to work correctly. In fact, a perfectly legal path you can use in the UI is ProtX.yyy; at the top level we use the protocol type definition as the root of a data field name. But for the API, referencing the field this way doesn’t work properly, and API calls return with zeroed-out information as mentioned above.

The Workaround

The solution is to reference the protocol or structure through the parent object in which it is instantiated. For instance, if the instantiation was as follows:

Protocol Frame
{
ProtX myProtX;
}

You can reference the data field of yyy using Frame.myProtX.yyy when you call NmAddField.
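Put side by side, and continuing the hypothetical ProtX example from above, the only difference is the path string passed to NmAddField:

// Path suggested by the UI: the call succeeds, but offset, size, and value come back as zero.
NmAddField(myFrameParser, “ProtX.yyy”, &myID);

// Workaround: root the path at the parent protocol that instantiates the field.
NmAddField(myFrameParser, “Frame.myProtX.yyy”, &myID);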

Difficult Decisions

When delivering a product, you often have to make conscious, difficult decisions about which bugs to fix and which to let out into the wild. One of the deciding factors in this case was whether there is a workaround. And in this case it’s not a difficult workaround, but only if you know there’s a problem to begin with.


Network Monitor Freezes While Loading Capture


If you encounter a situation where Network Monitor freezes while opening a capture file, try updating to the latest parsers from the CodePlex Parser Site.

A parser issue with SMTP traffic causes the engine to get into a state where it is stuck in a loop. Fortunately this is easy to fix by installing a patched SMTP parser which resolves the problem. Be sure to check the CodePlex site for updates, as we try to update them monthly.

Marking Frames with Network Monitor 3.4


Marking frames is a convenient way to temporarily flag a location in the trace you wish to keep track of during a troubleshooting session. But there is no built-in way to mark frames in Network Monitor 3.4. However, using frame comments, coloring rules, and AutoHotkey, you can implement frame marking functionality.

How it Works

Color rules can be created using any general filter. This includes filtering on the frame comment title, which is exposed through the property FrameVariable.CommentTitle. By appending some text to the comment title, for example “m:red”, we can create color rules that display a color based on that text. What makes this seamless is AutoHotkey’s ability to read and control the UI by running scripts based on the keystrokes we define.
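For example, the rule that colors a frame red is just an ordinary display filter on the comment title; it would look something like the following (the exact rule text ships in the color rule file you download below, so treat this as an illustration):

FrameVariable.CommentTitle.Contains("m:red")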

The Setup

Following these three simple steps will allow you to mark frames with Network Monitor 3.4.

  • Install AutoHotkey (it’s free) – (http://www.autohotkey.com/) If you haven’t used this tool before, you’ll be surprised at all the cool things it can do. You might find some other clever ways to automate your computer if you decide to become familiar with AutoHotkey and its scripting language. But this knowledge is not necessary in order to implement color marking with Network Monitor 3.4.
  • Download and Run AutoHotkey Script - Once you have AutoHotkey installed, you can download the AutoHotkey script I created. Once it’s downloaded, you can double-click it to run, as it should be automatically associated with AutoHotkey. Once it runs, you’ll see the AutoHotkey icon in the notification area of the taskbar. Now it’s ready to look for the defined keystrokes and will add the appropriate strings to the comments of the frame you have selected. But we still need to set up the color rules so Network Monitor knows how to interpret those comment identifiers. If you want this to run every time you reboot your machine, you can place a shortcut to this file in your Startup folder.
  • Download Color Rules - The final step is to download this color rule file and import it into Network Monitor 3.4. Place this file into the Color Rules folder under Network Monitor 3 in your documents folder. Then open Network Monitor 3.4 and open a capture file. Click the Color Rules button which will open up the Options dialog for Color Rules. Make sure the “Always append new rules” is NOT selected so that the newly imported rules appear at the top of the list. This will give them the highest priority. Then select Open, My Sets, and click the MarkingColors set that you just copied.

clip_image001

As you can see, there are seven color rules defined to identify various strings: m:cyan, m:orange, m:purple, m:green, m:yellow, m:blue, and m:red. When the appropriate keystrokes are hit, the comment title is modified to add the related text. This triggers the associated coloring rule based on the first match in the list above.

Marking Frames with Shift-F1

Now with the previous steps completed, you will be able to select one frame and mark it. For instance, to mark a frame with the first defined color, just press Shift+F1. This should make the frame show up with a red background. Pressing Shift+F2 will override the color and change it to blue. The associated comment will now end with m:blue. You can also remove any comment color tags by using Shift+F12 or the original keystroke that marked the frame.

I’ve also enabled a multi-level color marking scheme. By using Ctrl+F1, you still get a Red colored frame. But when you press Ctrl+F2, it appends m:blue and since that has a higher priority in the color list the frame is displayed with a blue background. Then by pressing Ctrl+F12, you can revert to the previous color. It will remove that last applied color and leave the m:red portion of the text in the comment. At this point the frame will revert to a red background.

If you’ve applied multiple levels of colors using the Ctrl+Function Key, you can use Shift+F12 to remove them all. Also all of these comment additions should not affect any preexisting comments you have created, unless there’s some text resembling the “m:color” type identifiers I used.

Color Marks the Spot

Marking frames can provide an easy way to track interesting parts of a trace as you navigate your way through complex network traffic. You can even jump to the next marked frame by looking for “m:” in the Find dialog (Ctrl+F) with the filter CommentTitle.Contains(“m:”). And while it’s not perfect, for instance you can’t select multiple frames and mark them all, it does provide a simple way to mark frames with color using keystrokes.

Reassembly Made Easier


With our latest 3.4.2455 release of the parsers and a simple filter, you can now view reassembled traffic more easily for certain protocols. Normally when you reassemble a trace you see all the original frames plus the newly inserted reassembled frames. Using a filter with a brand new property, you can now see only the complete frames, whether or not they were fragmented to begin with.

Overview of Reassembly

For a full explanation of Reassembly, you can check out our Reassembly Blog. In short, the resulting reassembled view of the data contains both the original frames and fragments, and the newly inserted frames for each protocol that might fragment its payload. For instance, all the TCP frames that have been fragmented are reassembled and inserted into a new frame in the trace with a special header, PayloadHeader. Additionally if those reassembled TCP frames are also fragments at a higher layer protocol, like HTTP, new frames are also inserted for those. The result is Frame Summaries that seem to show as duplicates, but they are similar versions with a complete payload.

New Properties for Filtering

If you are interested in complete frames for a specific layer, it would be convenient to filter out all the fragments. To make this possible, we’ve added new properties into the parsers which tag frames that are complete for that protocol. By filtering on this property you can see all the complete frames for the layer in question. We have enabled this property for TCP, NBTSS, and HTTP. NBTSS is important because SMB uses it as a transport.

An Example of HTTP Fragmentation

In this first example, I’ll filter on HTTP traffic before it’s reassembled. Notice the last frame below: it is a payload and thus an HTTP fragment of the frame above it. To see the entire HTTP frame we need to reassemble this trace.

HTTP:Request, GET /ads/7945/0000007945_000000000000000607478.gif

HTTP:Response, HTTP/1.1, Status: Ok, URL: /ads/7945/0000007945_000000000000000607478.gif - GIF: Version=GIF89a, Width=300, Length=250

HTTP:Request, GET /ads/7945/0000007945_000000000000000607449.gif

HTTP:Response, HTTP/1.1, Status: Ok, URL: /ads/7945/0000007945_000000000000000607449.gif - GIF: Version=GIF89a, Width=300, Length=250

HTTP:HTTP Payload, URL: /ads/7945/0000007945_000000000000000607449.gif - GIF: Version=<garbled binary data>, Width=8195, Length=49568

After reassembly, we filter on HTTP again. Now you see the two sets of original traffic paired with their inserted responses; the bracketed notes below identify which frames were newly inserted.

 

Summary Description [Notes]

HTTP:Request, GET /ads/7945/0000007945_000000000000000607478.gif [Original Frame]

HTTP:Response, HTTP/1.1, Status: Ok, URL: /ads/7945/0000007945_000000000000000607478.gif - GIF: Version=GIF89a, Width=300, Length=250 [Original Frame]

HTTP:Response, HTTP/1.1, Status: Ok, URL: /ads/7945/0000007945_000000000000000607478.gif - GIF: Version=GIF89a, Width=300, Length=250 [Inserted TCP frame from TCP fragments]

HTTP:Request, GET /ads/7945/0000007945_000000000000000607449.gif [Original Frame]

HTTP:Response, HTTP/1.1, Status: Ok, URL: /ads/7945/0000007945_000000000000000607449.gif - GIF: Version=GIF89a, Width=300, Length=250 [Original Frame]

HTTP:Response, HTTP/1.1, Status: Ok, URL: /ads/7945/0000007945_000000000000000607449.gif - GIF: Version=GIF89a, Width=300, Length=250 [Inserted TCP frame from TCP fragments; also the start of an HTTP fragment]

HTTP:HTTP Payload, URL: /ads/7945/0000007945_000000000000000607449.gif [Original Frame, HTTP end fragment]

HTTP:Response, HTTP/1.1, Status: Ok, URL: /ads/7945/0000007945_000000000000000607449.gif - GIF: Version=GIF89a, Width=300, Length=250 [Inserted HTTP fragment]

As you can see, this is confusing because each inserted frame is actually a duplicate. There are 3 inserted frames because of fragmentation at both the TCP and HTTP layers. Because of this, the final response appears two extra times: once because TCP fragmented the data, and once because HTTP also fragmented its payload.

Now you might think you could simply filter for only those frames that have been reassembled. Below I’ve done that using the filter “PayloadHeader”:

HTTP:Response, HTTP/1.1, Status: Ok, URL: /ads/7945/0000007945_000000000000000607478.gif - GIF: Version=GIF89a, Width=300, Length=250

HTTP:Response, HTTP/1.1, Status: Ok, URL: /ads/7945/0000007945_000000000000000607449.gif - GIF: Version=GIF89a, Width=300, Length=250

HTTP:Response, HTTP/1.1, Status: Ok, URL: /ads/7945/0000007945_000000000000000607449.gif - GIF: Version=GIF89a, Width=300, Length=250

And you can see this isn’t sufficient, because frames that were never fragmented (like the requests) don’t appear at all. So now let’s use the new property and apply a filter:

Property.HTTPCompleteFrames == 1

HTTP:Request, GET /ads/7945/0000007945_000000000000000607478.gif

HTTP:Response, HTTP/1.1, Status: Ok, URL: /ads/7945/0000007945_000000000000000607478.gif - GIF: Version=GIF89a, Width=300, Length=250

HTTP:Request, GET /ads/7945/0000007945_000000000000000607449.gif

HTTP:Response, HTTP/1.1, Status: Ok, URL: /ads/7945/0000007945_000000000000000607449.gif - GIF: Version=GIF89a, Width=300, Length=250

And Voila! We now see only complete HTTP frames and the trace we actually wanted.

To dive into how this works: the parser marks each complete frame as it reads it. Since the requests were never fragmented, they’re marked as complete and displayed. The responses were fragmented, and in the case of the last frame, only the last HTTP reassembled frame is considered a complete HTTP frame. The corresponding TCP reassembled frame is not displayed because, from HTTP’s perspective, it wasn’t complete.

NBTSSCompleteFrame Property

We have enabled the same functionality for NBTSS. Since SMB relies on it as a transport, SMB traffic over NBTSS displays in the same way as the HTTP example above.
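By analogy with the HTTP example, the filter for the property named in this section would look like this:

Property.NBTSSCompleteFrame == 1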

TCP Fragmentation

In some cases a protocol is fragmented by TCP, but no further fragmentation occurs in the protocol layers above. In this case we don’t know what a complete frame is at the upper protocol layer, but you can use a property at the TCP layer to find all complete TCP frames. Combining that with the frames that carry a PayloadHeader gives you a list of all complete TCP frames plus the inserted reassembled frames for anything that was originally fragmented. The filter looks like this:

Property.CompleteFrame == 1 OR PayloadHeader

For instance, for SMB2 traffic over TCP, this does a decent job of showing you the traffic you want to see. The only caveat here is that since SMBOverTCP can fragment data, you still see those extra fragments. Still, the results are much easier to look at than the unfiltered reassembled trace.

Here’s an example of a reassembled SMB2 trace where you can see the original READ plus the inserted reassembled frame.

SMB2:C READ (0x8), FID=0xFFFFFFFF00000041 (data\WebGuide4_4109-VISTA.exe@#4848) , 0x7600 bytes from offset 4296704 (0x419000)

SMB2:R READ (0x8), File=data\WebGuide4_4109-VISTA.exe@#4848, 0x7600 bytes read

SMB2:R READ (0x8), File=data\WebGuide4_4109-VISTA.exe@#4848, 0x7600 bytes read

SMB2:C READ (0x8), FID=0xFFFFFFFF00000041 (data\WebGuide4_4109-VISTA.exe@#4848) , 0x8000 bytes from offset 32768 (0x8000)

SMB2:R READ (0x8), File=data\WebGuide4_4109-VISTA.exe@#4848, 0x8000 bytes read

SMB2:R READ (0x8), File=data\WebGuide4_4109-VISTA.exe@#4848, 0x8000 bytes read

SMB2:C READ (0x8), FID=0xFFFFFFFF00000041 (data\WebGuide4_4109-VISTA.exe@#4848) , 0x8000 bytes from offset 73728 (0x12000)

SMB2:R READ (0x8), File=data\WebGuide4_4109-VISTA.exe@#4848, 0x8000 bytes read

SMB2:R READ (0x8), File=data\WebGuide4_4109-VISTA.exe@#4848, 0x8000 bytes read

SMB2:C READ (0x8), FID=0xFFFFFFFF00000041 (data\WebGuide4_4109-VISTA.exe@#4848) , 0x2000 bytes from offset 65536 (0x10000)

SMB2:R READ (0x8), File=data\WebGuide4_4109-VISTA.exe@#4848, 0x2000 bytes read

SMB2:R READ (0x8), File=data\WebGuide4_4109-VISTA.exe@#4848, 0x2000 bytes read

Now using the above filter you see all the SMB Read fragments disappear.

SMB2:C READ (0x8), FID=0xFFFFFFFF00000041 (data\WebGuide4_4109-VISTA.exe@#4848) , 0x7600 bytes from offset 4296704 (0x419000)

SMB2:R READ (0x8), File=data\WebGuide4_4109-VISTA.exe@#4848, 0x7600 bytes read

SMB2:C READ (0x8), FID=0xFFFFFFFF00000041 (data\WebGuide4_4109-VISTA.exe@#4848) , 0x8000 bytes from offset 32768 (0x8000)

SMB2:R READ (0x8), File=data\WebGuide4_4109-VISTA.exe@#4848, 0x8000 bytes read

SMB2:C READ (0x8), FID=0xFFFFFFFF00000041 (data\WebGuide4_4109-VISTA.exe@#4848) , 0x8000 bytes from offset 73728 (0x12000)

SMB2:R READ (0x8), File=data\WebGuide4_4109-VISTA.exe@#4848, 0x8000 bytes read

SMB2:C READ (0x8), FID=0xFFFFFFFF00000041 (data\WebGuide4_4109-VISTA.exe@#4848) , 0x2000 bytes from offset 65536 (0x10000)

SMB2:R READ (0x8), File=data\WebGuide4_4109-VISTA.exe@#4848, 0x2000 bytes read

Enabling Additional Layers

An advantage of extensible parsers is that we can continue to deliver functionality and improvements without having to release a new version of Network Monitor. If there are other layers you would like to see enabled, please feel free to comment below. And remember to use these properties when looking at Reassembled traffic.

New Videos for Advanced Filtering and 3.4 UI Features


You folks have been asking for updated videos about filtering and now I’ve made two more available. These include information about filtering with properties to understand how to access TCP and SMB values that don’t appear directly on the wire. We talk about operands and how to filter out traffic you don’t want to see. We discuss subnet filtering and how to use the pair property to easily filter between source/destination ports and addresses. Additionally there’s another new video to discuss some new UI features in Network Monitor 3.4.

The new videos are:

I have also updated the Network Monitor Usage Video page as well as the original NM3 TV Blog. Hope you find these helpful and Enjoy!

Filtering On Timestamps


There are situations when you want to narrow a trace down to a certain time frame. However, creating a filter for a timestamp is not very straightforward. We will discuss how timestamps operate and ways to make filtering on timestamps workable.

How Time Stamps Work

With the latest version of Network Monitor 3.4, there are now two different ways timestamps are stored. In the previous capture file format, there is a master timestamp in the file header and each frame records an offset from that initial time. In the 3.4 release, we extended our capture file format to save a higher-resolution timestamp and time zone information per frame. This feature allows you to get an adjusted view of the timestamps and correlate trace data with other logs, such as the event log, which also adjusts times to your local time zone. Network Monitor 3.4 remains backwards compatible with the older format; however, for those files you won’t be able to access the time zone information. Our help file has all of the details of the file format if you need more information.

You can determine which version of a capture file you are looking at by going to the File menu and selecting Properties on an open trace. It will state the version as well as the time zone information where the trace was originally taken.

Here’s an example of the older format with no time zone information. We can only see the local time of the trace based on when it was taken.

clip_image001

Here’s an example of a capture file with time zone information.

clip_image002

Where Are the Time Properties Stored?

We store all Frame related metadata in a top level object called FrameVariable. Within this object you can access all the time related properties as well as many other frame level properties like Frame Length and Media Type. The time related properties we will discuss are listed below:

  • FrameVariable.TimeOffset – This is the offset based on the initial time stamped in the capture header.
  • FrameVariable.TimeDelta – The distance in time from the last physical frame in the trace.
  • FrameVariable.TimeDateLocalAdjusted – Time and Date in the trace adjusted from the time zone where it was taken to your local time zone.

Filtering on Time Offset

Out of all the examples here, filtering on Time Offset is the most straightforward. The only trick is that the value we use is in tenths of microseconds (100-nanosecond units), while the value we usually display is in fractions of seconds. So if you type in 10,000,000, this really represents 1 second. To filter on all frames between 10 and 20 seconds from the beginning you would type:

FrameVariable.TimeOffset > 100000000 AND FrameVariable.TimeOffset < 200000000

Filtering on Time Delta

This value is also represented in tenths of microseconds. But the trick with Time Delta is that it’s based on the last physical frame. I discuss this in some detail in this blog about measuring response times. Just remember that if you have a filter applied, the time delta is still based on the last physical frame and not the last one displayed based on your filter. As an example, the following filter finds all frames where the time delta from the last physical frame is greater than 2 seconds.

FrameVariable.TimeDelta > 20000000

Filtering on Time of Day

You can’t filter successfully using a time/date string for any of our time fields. While it would be the natural thing to do, we never implemented a way to convert a time string within a filter due to development constraints. For Time and Date we instead use the FileTime, an operating system structure that records the number of 100-nanosecond intervals since January 1, 1601 (UTC). So in order to find the numeric value you need, you have to convert the date into this 64-bit number.

One way to do this is to find a frame you know the time of and use it to generate the filter by right-clicking the Time Of Day column and choosing the add-as-display-filter option. Keep in mind that this is the only column for which we’ve enabled this translation. All other time/date related columns, Time And Date, Time Date Local Adjusted, and Time Local Adjusted, are treated as strings, so the generated filter incorrectly contains a string value instead. Obviously this wasn’t the intended behavior, but rather the default behavior for any string data in a column.

Another way to get the value is to convert the time to a FileTime value manually. This might be more useful if it’s difficult to find an example frame to use as a reference. There are actually some web sites which can do this for you; in particular I found this site: http://silisoftware.com/tools/date.php.

Alas, you still have to do some conversion. Since the FileTime is based on UTC, you have to adjust for the time zone where the trace was captured based on its difference from GMT. So for instance if a trace is taken in EST, which is -5 from GMT, I have to subtract 5 hours. For example, if I have a timestamp of 11:16:35 AM 3/5/2010, I would need to enter “March 5, 2010 6:16:35 AM” into the above web page. When I do this, it returns a FileTime value of 129122613950000000, which I can then plug in as a filter:

FrameVariable.TimeOfDay > 129122613950000000
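If you would rather not depend on a web site, a few lines of C against the Win32 API will produce the same kind of 64-bit value. This is only a sketch: you supply the capture time already converted to UTC, and SystemTimeToFileTime does the rest.

#include <windows.h>
#include <stdio.h>

int main(void)
{
    // The capture time of interest, expressed in UTC.
    SYSTEMTIME st = { 0 };
    st.wYear = 2010; st.wMonth = 3; st.wDay = 5;
    st.wHour = 11; st.wMinute = 16; st.wSecond = 35;

    FILETIME ft;
    if (SystemTimeToFileTime(&st, &ft))
    {
        // Reassemble the two 32-bit halves into the 64-bit FileTime value.
        ULARGE_INTEGER value;
        value.LowPart = ft.dwLowDateTime;
        value.HighPart = ft.dwHighDateTime;

        // Plug this number into a filter such as: FrameVariable.TimeOfDay > <value>
        printf("%llu\n", (unsigned long long)value.QuadPart);
    }
    return 0;
}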

Filtering On Time Date Local Adjusted

Filtering on this property is done as a string. And due to how string comparisons work, the time formats don’t always filter correctly. For instance while “10:50:51” < “10:50:52” makes sense, it’s also true in terms of string comparisons that “1:50:51” > “10:50:51”. This is because the comparison is strictly based on the ASCII values. So what I recommend is that you use the FrameVariable.TimeOfDay property instead, which is still available for 3.4 captures. In this case you can add the column to find the local time, or calculate it manually based on the time zone information shown in the file properties dialog.

Understanding Network Monitor Time Stamps

Filtering on times can be helpful when you want to narrow a large trace based on a time period. In fact you can also use NMCap /InputCapture x.cap /capture "FrameVariable.TimeOfDay > xxx" /file:out.cap to automate this process when you have many traces to look through. Hopefully you now have an understanding of how to filter with Network Monitor timestamps.

Windows Phone 7 Connectivity Issues and Smart Potato


Smart Potato is an application that allows you to access and manage your Media Center recordings and stream recorded TV. But when I first attempted to test this out on my phone, it complained that it could not reach the server. Detailed below is how I used Network Monitor to troubleshoot this issue.

My Setup

In my case I simplified and tested this from within my private wireless network. The Smart Potato app has a settings section which allows you to change, among other things, the server URL. So I modified this field to contain the private address for my Media Center, http://192.168.1.7. When I attempted to connect from Smart Potato, I received an error message that it could not find the server.

Getting a Trace

I started a network trace on my Media Center PC. I assumed all traffic should be directed there, but I suppose it’s possible this was not the case. However, it is easy to start by sniffing at the Media Center PC, and if I didn’t see traffic I could move up to the router. I also looked at my router to see what my Windows Phone 7 IP address was. I could also have looked in the Settings on my phone for my wireless connection, as it shows up there as well.

Using this IP address I applied the following Display Filter so that I would only see traffic from or to my phone.

IPv4.Address == 192.168.1.8

I could have instead used a capture filter, but there might have been other relevant traffic between my Media Center and another service. If I had used a capture filter, I would have had to rerun my test to capture that data. With no capture filter I still capture everything, and I can narrow down my view by applying various display filters after the fact.
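For reference, if I had wanted a capture filter, I could have entered the same expression in the Capture Filter window, or scripted it with NMCap along these lines (a sketch; the adapter wildcard and output file name are placeholders):

NMCap /network * /capture "IPv4.Address == 192.168.1.8" /file:phone.cap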

Starting My Test

So the first thing I noticed when I started my capture was a TCP SYN request on port 80 from my phone. But the traffic looked like this:

 

Source -> Destination: Description

192.168.1.8 -> 192.168.1.7: TCP:Flags=......S., SrcPort=50914, DstPort=HTTP(80), PayloadLen=0, Seq=1438303684, Ack=0, Win=8192 ( Negotiating scale factor 0x8 ) = 8192

192.168.1.8 -> 192.168.1.7: TCP:[SynReTransmit #2]Flags=......S., SrcPort=50914, DstPort=HTTP(80), PayloadLen=0, Seq=1438303684, Ack=0, Win=8192 ( Negotiating scale factor 0x8 ) = 8192

192.168.1.8 -> 192.168.1.7: TCP:[SynReTransmit #2]Flags=......S., SrcPort=50914, DstPort=HTTP(80), PayloadLen=0, Seq=1438303684, Ack=0, Win=8192 ( Negotiating scale factor 0x8 ) = 8192

What we see is that the Media Center PC never responded to the request. Each subsequent request is flagged with the SynReTransmit keyword. If I wanted to look for these specifically, I could use a filter of:

Property.TCPSynReTransmit

For this traffic, I concluded that either the service was not listening on port 80 or perhaps it was being blocked by my firewall. I enabled firewall logging, checked the logs, and saw that nothing was logged, so my firewall was letting that traffic through.

Next I used netstat -ano, which lists all ports in use and their associated process IDs.

C:\Windows\System32\drivers\etc>netstat -ano

Active Connections

  Proto  Local Address     Foreign Address   State       PID
  TCP    0.0.0.0:135       0.0.0.0:0         LISTENING   848
  TCP    0.0.0.0:445       0.0.0.0:0         LISTENING   4
  TCP    0.0.0.0:554       0.0.0.0:0         LISTENING   4200
  TCP    0.0.0.0:954       0.0.0.0:0         LISTENING   5944
  TCP    0.0.0.0:2869      0.0.0.0:0         LISTENING   4
  TCP    0.0.0.0:3389      0.0.0.0:0         LISTENING   1292
  TCP    0.0.0.0:3390      0.0.0.0:0         LISTENING   1292
  TCP    0.0.0.0:5357      0.0.0.0:0         LISTENING   4
  TCP    0.0.0.0:9080      0.0.0.0:0         LISTENING   4
  TCP    0.0.0.0:10243     0.0.0.0:0         LISTENING   4



As you can see, there was no process listening on port 80. Yet I did see port 9080, which, since it ends in 80, made me think the service might be using a different port. And sure enough, I ran the Remote Potato configuration tool, which is the service on the Media Center PC that shares the streaming data, and saw that it is indeed set to listen on port 9080.
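As a quick alternative to scanning the whole listing, you can also pipe netstat through findstr to look for a specific port, for example:

netstat -ano | findstr ":80"

Keep in mind this is a plain substring match, so it will also show ports like 8080.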

Fixed Port, but Now What?

I then modified the URL to say “HTTP://192.168.1.7:9080”. This directs the connection to an alternate port instead of the default of 80. And when I checked Smart Potato after a reinstall, I found that the default string which I hastily replaced also ended with 9080. I suppose I could have paid more attention when changing the original URI.

After updating the port I tried again, but this time it reported a bad username or password. I decided that since I had everything set up, I would trace again. This is the traffic I saw.

 

192.168.1.8 -> 192.168.1.7: TCP:Flags=......S., SrcPort=53492, DstPort=9080, PayloadLen=0, Seq=793206766, Ack=0, Win=8192 ( Negotiating scale factor 0x8 ) = 8192

192.168.1.7 -> 192.168.1.8: TCP:Flags=...A..S., SrcPort=9080, DstPort=53492, PayloadLen=0, Seq=4192041810, Ack=793206767, Win=8192 ( Negotiated scale factor 0x2 ) = 32768

192.168.1.8 -> 192.168.1.7: TCP:Flags=...A...., SrcPort=53492, DstPort=9080, PayloadLen=0, Seq=793206767, Ack=4192041811, Win=64 (scale factor 0x8) = 16384

192.168.1.8 -> 192.168.1.7: HTTP:Request, GET /xml/login, Query:un=paul&pw=password

192.168.1.7 -> 192.168.1.8: HTTP:Response, HTTP/1.1, Status: Ok, URL: /xml/login

You will notice that in the Request frame there is a username and password in plain text. (BTW, I changed the password to “password”.) The fact that it’s an HTTP request told me that this username and password were probably not the machine’s credentials, as I had assumed when I entered them into the application settings. Once again I looked at the Remote Potato settings and noticed it also had a security settings box. I reset this information on my phone, tried again, and this time I was able to connect.

Have My Potato and Watch It

Now I’m able to stream my Media Center recorded shows to my phone, which is pretty cool. If I’m on the road, in a plane, or somewhere remote I can schedule something to record, and then using Smart Potato, watch it immediately. Using Network Monitor to view data, and focusing on the IP address, will often give you some interesting information and is a useful way to spy on a device.

Open Source Freedom for Network Monitor Experts


We are excited to announce that we have moved 3 Network Monitor Expert projects and the Network Monitor SDK to the Outercurve Foundation. You can now contribute to:

  1. NMDecrypt – Decrypts SSL data, given the private key.
  2. NMTopUsers – Displays the top talkers on your network.
  3. NMTopProtocols – Displays protocol distribution.

Additionally we’ve moved the Network Monitor SDK which contains helper code for experts you create.

Why the Outercurve Foundation?

The Outercurve Foundation, previously known as the CodePlex Foundation, defines and maintains a contribution framework and simple licensing terms. These projects will use the BSD license, a familiar open source license that many of you are comfortable with. The foundation allows you to contribute to these projects easily, after executing a simple DocuSign contract.

Get Involved

If you have some good ideas for improvements to our Network Monitor experts or ideas for new Experts, we’d love to have you join our project as a contributor. We’ve already heard from Bob Sledge of Beta-Signma Data Systems, who has expressed interest in making some useful updates to the NMTopUsers expert. He is now signed up through the Outercurve Foundation and is ready to help us out. To get involved you can contact us via the Outercurve page for our Network Monitor Experts Project. By contributing to these open source projects, we can improve and create tools that make all of our lives easier and engage the power of the community and Open Source.


NMDecrypt Expert Updates - Version 2.3


When I first wrote about NMDecrypt Expert in this blog I mentioned some limitations. There have also been bugs reported since then. I decided I would fix some of these problems and address some of the limitations. My hope is to make this tool even more useful, but also to point out other ways community members could help extend the functionality further and get involved now that our Experts are open source. Below I will call some of these updates out explicitly.

Now You Can Select an IPv4/IPv6 Conversation

Previously you had to select the specific TCP conversation in order for the expert to work. This TCP conversation also had to contain the full TLS/SSL session setup, because the expert needs this data to do the decryption. However there are cases where a single IE session would spawn multiple TLS/SSL sessions, with only the first session containing the TLS/SSL session setup. Now, the expert can work in situations where multiple SSL sessions exist with two given restrictions:

  1. The trace can only contain traffic that is associated with the private key you’ve provided. If there is other traffic that happens simultaneously, the expert will fail. Community: It should be possible to compare the certificate path provided with the one used during negotiation. With this mapping you could potentially decrypt all traffic that uses the same certificate. Perhaps we could also just ignore the failures and continue on.
  2. Sessions can’t be intermingled. When the expert decrypts traffic, there’s a running check called a MAC (Message Authentication Code). State between separate TCP conversations is not maintained, so data from another TCP session messes up the MAC calculation and the expert fails. The workaround for this problem is to manually save each TCP session, in order, to another file. This is simple using the Frame Buffer manager feature in the UI. Just select each TCP conversation, press Ctrl+A to select all frames, and right click to open the Frame Buffer manager. Once you create a new file, continue to add frames from each session in order to the same file, and then close the file once you are done. Community: While it might be hard to maintain state in the expert, I think it would be simple to make multiple passes through the capture file, limiting each pass to a single TCP conversation.

Support for Alternate Network Paths

The expert examines the traffic and derives a fully qualified path from the frame it’s examining. For instance, if the path it sees is Ethernet.IPv4.IPv6.TCP, it uses this as a seed to understand the root for the TLS/SSL specific data. However, each of these is verified against known paths. In order to support new paths, I’ve added a few more common paths for wireless and ESP over IPv4/IPv6. Incidentally, I also fixed a bug for IPv6 traffic in this area. Community: If there are other network paths that we need to support, this is a very easy change. Also related to this is support for TLS/SSL for other protocols. Currently we support LDAP and HTTP, but this could easily be extended to support other protocols.

TCP Fragmented Data

The expert depended on the ability of the Network Monitor parsers to properly reassemble TCP data. However, there are certain limitations when data does not align on a TCP frame boundary. This is because TCP streams data, so it’s possible for the data to break on non-frame bounds. From the TLS/SSL perspective, though, the length of each data segment is described by the protocol itself. So I modified the expert to detect when there is not enough data to account for a full TLS/SSL segment and, in those cases, reassemble that data manually. Community: There are bound to be differences between various versions of TLS/SSL. My main test case was TLS 1.0, so be aware of variances in other versions that could cause issues.

Get Involved

The link to the latest version of the expert is available on our Expert Codeplex site. By highlighting some of the remaining issues I’m hoping to get more community involvement to further update this expert. And with the move to the OuterCurve Foundation contributing has never been easier.

NMTopUsers Expert: Community to the Rescue


We have some great new updates for the Top Users expert! But this time, I had nothing to do with it… Bob Sledge contacted me about extending and fixing the Top Users Expert. And so with these updates, we now have an improved Expert for Network Monitor.

What’s New with Top Users

The biggest update is that there is now only one version. You can switch between the Endpoint and Conversation mode from within the expert. You also have the ability to open new captures directly from the tool. Here’s a list of the other changes:

  • Conversation mode will show a percentage when viewing a single address type, like Ethernet or IPv4.
  • The sorting has been improved when changing address types.
  • There was a bug in the send/receive statistics that is now fixed.
  • Command line arguments are case insensitive now.
  • The DumpRow command line feature wasn’t working and is now fixed.
  • Data is now right justified properly.
  • Formatting for numeric data has been improved.
  • Address columns are auto sized correctly.
  • Status bar shows start/end frame times for entire trace.
  • Converted to VS2010

Bob’s Footsteps

To get Bob involved, we first had him sign a typical open source agreement electronically. This is basically to protect the Outercurve Foundation. After that we discussed the changes he was proposing on the phone, because he wanted to make sure they were sound. We added Bob as a contributor to the CodePlex project, and his first step was to branch the project. Once his changes were made in the branch, I synced and verified that the changes made sense with regards to the original architecture. I also made some small modifications to the version and build process. Then we merged the project back and we were done.

Contributions to any of our projects can be as little as a feature request or bug report.  If you want to go further you can suggest code to fix or address a feature.  When you get more involved in a project there are opportunities like becoming a contributor or even coordinator.  Issue reports and discussions can be started for the project you are interested in. The list of projects is listed on the main CodePlex Project page. Also, if you have your own ideas for Experts you'd like to get started with in open source; we'd welcome you to join us as well. Just drop us a line by going to our Outercurve Project Page, and send email to the project lead.

With a Little Help from my Friends

It’s awesome to see contributions, like these from Bob Sledge, for a tool that is used by many. This is exactly the reason we wanted to make these experts open source. Perhaps this will inspire others to put their two cents in.

NMTopProtocols Expert Released


Michael A. Hawker is the Program Manager for Network Monitor. His focus has been on the API, UI, and Experts as they have been developed through versions 3.3 and 3.4.

You’ve seen a lot of updates lately on Experts with the move to the Outercurve Foundation, but we have a new expert for you too! It’s the sibling of Top Users: Top Protocols. Download it here over on CodePlex.

What is Top Protocols?

Top Protocols is another simple expert designed to give you a high-level summary of what’s occurring in a trace. Where Top Users shows you the chattiest boxes on a network, Top Protocols shows you the chattiest protocols.

Once installed, it works like any other Expert in the Experts menu. Just find NMTopProtocols in the list and select Run Expert. The data will automatically start being parsed as indicated in the status bar:

clip_image002

What can it do?

Top Protocols walks through your trace and constructs a count of the highest-level protocols it encounters. It counts the number of times it sees each one, the structure for how that protocol was found in the stack, and the number of bytes each frame of information contained. It can also reassemble the data as it goes, so fragments get counted toward the initiating protocol instead of just raw transport data.

One important note here is that Top Protocols only parses as far as it can using the selected Parsing Profile from Network Monitor. So, if you want the full breadth of parsers, make sure you are on the Windows profile first. If you just want a quick summary, switch to a lighter profile first, and Top Protocols will load a little quicker.

After it’s collected this information, it displays it in three main views in the tabbed interface:

clip_image003

Note: you’ll need the MS Chart Controls pointed to in the download page in order to use the Pie Chart and Time Graph modes.

Overview

clip_image005

The overview is a basic table of the raw data collected. By default it’s sorted by the breakdown of the Protocol tree (which you can also see on the left), but you can use the columns at the top to sort by the different data points such as number of frames or number of bytes.

If after you’ve done some sorting you want to go back to the initial view, you can find the Restore Default Ordering option in the context menu of the grid:

clip_image006

If you select a node in the tree on the left, your view will be filtered to see only that protocol and its children:

clip_image008

Pie Chart View

The Pie Chart view is a recent addition to the tool and is pretty simple at the moment. It gives you a quick and dirty way to visually see the most prevalent protocol in the trace:

clip_image010

Here we can see that HTTP traffic is the primary component to this trace.

This view also filters out the noise and only displays those elements which are end nodes of the tree.

Time Graph View

This view is the most exciting of all in Top Protocols. It’s almost like another expert in itself, and you get it for free!

clip_image012

The Time Graph view shows how the protocol traffic was received over the course of time the trace was taken. This lets you see spikes in protocol traffic to more easily determine when data was sent or received.

By default this mode filters out the protocols that don’t meet a certain threshold. This threshold can be configured in the options menu, as well as the scale of the graph.

Helpful Tricks

There are a lot of settings in Top Protocols, which can be found under the options menu.

clip_image013

The first option simply tells Top Protocols whether or not to reassemble the data first, so any fragments encountered later are counted toward the tally of the initiating protocol. Therefore, if you had an HTTP payload spanning multiple packets, each one would tally toward the total of HTTP traffic encountered. With this option off, you’d see only HTTP counts for the headers and the rest would be lumped under TCP. This could mask the intensity of certain types of traffic when turned off, but the comparison can tell you how fragmented your data is and whether it was more requests with smaller payloads or fewer requests with larger payloads. A work item for the future could be to calculate both of these values at the same time and present that data as well.

The next set of options refers to the Time Graph and how it is calculated. The first option changes the number of intervals used to segment the data. The more slices used, the greater the resolution to see changes, but the more memory required. You can also determine how much data needs to occur for a protocol to automatically be selected in the graph.

Use the “Show Tree as Hierarchy” option to decide if the data is grouped under their carrier protocols or not. And use the following three options to determine whether or not to use certain filters available from Network Monitor.

And lastly, you can decide how all these settings are persisted.

One thing to note with all these settings though, is that they’ll only take effect the next time a file is loaded. However, you can quickly reload your current view using the ‘Reload File’ option in the File menu:

clip_image014

Why’s this version 3.2 and what’s next?

Top Protocols has been around for a while, but started out like most of our experts as an internal project. Since then, we’ve started pushing Experts into the community and now as part of the Outercurve foundation. This makes it a lot easier to work on these projects for everyone’s benefit as Paul’s explained before.

Originally Top Protocols was written by Paul and ran at the command prompt. Michael took over the project when the Experts feature was introduced and rewrote it to include more features and the GUI. It’s been through a couple of revisions since, which is why it was already up to v3.2 when it was transferred to Outercurve and made available to everyone.

As for what’s next, I recommend you check out the Top Protocols homepage. We welcome you to submit issues and ideas, and, if you’re adventurous enough, to hop on board and help out. We’ll see where we go from there together.

Microsoft Protocol Test Suites Available


We recently released a set of Microsoft Protocol Test Suites. OK it was a month ago, but we’ve been really busy…really! To access them you must have a Live ID and sign up. These Test Suites allow you to evaluate whether a protocol implementation meets certain interoperability requirements. They don’t cover every protocol requirement but can be a useful indication of interoperability. Plus all the source code is included so that you can extend them.

If you are involved in interoperating with Microsoft products, these Test Suites can provide some valuable information. Also be sure to check out all of our Interoperability Testing Links.

Lex Thomas Talks about Troubleshooting with Network Monitor


Lex Thomas is a Principal Technical Account Manager for the US Premier Support Services Team at Microsoft. He also provides Network Monitor training for premier accounts where he teaches the basics of network troubleshooting. In this three part video, Lex uses Network Monitor to troubleshoot potential Office 365 connectivity issues. There’s a lot you can learn by watching somebody use a tool, and this video contains tips and techniques which can provide you with new insight into troubleshooting issues with Network Monitor.

Enjoy!
