SP6’s Quick-Tips for Splunk Query Optimization

Practice good habits when writing Splunk queries and keep your Splunk searches as efficient as possible.

I think it happens to every Splunker as it happened to me. Yes, it went a bit to my head and I got to thinking I was the big data big shot – mining value out of every piece of data I came across in a jiffy. You want what? Yes, I can do that in a second! Everyone is happy and it seems like the pride and praise will only grow… until I came upon that day. The day when what was big data turned into huge data by stepping up from GBs to TBs indexed. Coolly, I worked my usual SPL magic… but the environment was not having it, and neither were the users – my Splunk query was slow, and results sputtered onto the screen many minutes later. What to do, what to do? Before telling the customer they were going to need to level up on their hardware game, I had better make sure I was squeezing every last drop of performance the Splunk environment had to give. That’s when I gave query efficiency a serious look, and these are the lessons learned since.

Slice and dice your data as early as possible

The lowest-hanging fruit in this tree is making sure you only retrieve what you will use – anything more and you’re wasting resources. Let’s start with the obvious: the time range picker. Splunk knows which data buckets to look at based on what your query’s time range tells it. When you reduce the time range, you allow Splunk to quickly discard irrelevant chunks of data right out of the gate. Extra points if you’re already familiar with “earliest”, “latest” and the relative time modifiers.

Another powerful tool is the default fields (host, index, source, sourcetype, etc.). Since these fields are created as part of the metadata generated when the data is indexed, they are readily available to be used as filters without having to extract the fields first. As you’ve probably seen, it’s common practice to start your Splunk query by specifying index and sourcetype at the very least. Hopefully, you’ve set values for the default fields that help you execute a dense search rather than ending up in a “needle in a haystack” scenario. This is why it’s a good idea to logically separate data into indexes that will end up serving different requirements.

Finally, we have the fields extracted at search time. Although they will not perform as fast as filtering on default fields, they are the next tools in line. Which fields you end up using depends, of course, on the type of data you’re working with, but common candidates to start with are fields that identify the event type, format, log level, component, etc. Applying these filters early means that the commands that come later do not needlessly process events that don’t contain what we’re looking for. Using indexed fields, fields extracted at search time and an appropriate time range, you can quickly cut out the chaff.

You’ll also want to place your streaming commands before your non-streaming commands.
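To make the early-filtering ideas concrete, here is a minimal sketch of a search that combines a tight time range, default-field filters and a search-time field filter. The index, sourcetype and field names (web, access_combined, status) are hypothetical stand-ins for illustration, not names from any particular environment:

```spl
index=web sourcetype=access_combined earliest=-24h@h latest=now status=500
| stats count BY host
```

Because the time range, index, sourcetype and status filter all sit in the base search, Splunk can discard irrelevant buckets and events before any downstream command runs.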
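As a rough illustration of putting streaming (filtering) work before non-streaming, transforming commands, compare these two hypothetical searches (the app_logs index and the log_level and component fields are made-up examples). First, the slower shape, which aggregates everything and filters afterwards:

```spl
index=app_logs
| stats count BY component log_level
| search log_level="ERROR"
```

And the faster shape, where the filter runs in the base search so the transforming command only ever sees matching events:

```spl
index=app_logs log_level="ERROR"
| stats count BY component
```

Both return the error counts per component, but the second one moves far less data through the pipeline.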