Whether you’re a cyber security professional, data scientist, or system administrator, when you mine large volumes of data for insights using Splunk, having a list of Splunk query commands at hand helps you focus on your work and solve problems faster than studying the official documentation.
This article is the convenient list you need. It provides several lists organized by the type of queries you would like to conduct on your data: basic pattern search on keywords, basic filtering using regular expressions, mathematical computations, and statistical and graphing functionalities.
The following Splunk cheat sheet assumes you have Splunk installed. It is a refresher on useful Splunk query commands. Download a PDF of this Splunk cheat sheet here.
Brief Introduction of Splunk
The Internet of Things (IoT) and Internet of Bodies (IoB) generate vast amounts of data, and searching for a needle in such a haystack of data can be daunting.
Splunk is a Big Data mining tool. It not only makes it easier for users to excavate and analyze machine-generated data but also visualizes and creates reports on such data.
Splunk Enterprise search results on sample data
Splunk contains three processing components:
- The Indexer parses and indexes data added to Splunk.
- The Forwarder (optional) sends data from a source.
- The Search Head is for searching, analyzing, visualizing, and summarizing your data.
Search Language in Splunk
Splunk uses what’s called Search Processing Language (SPL), which consists of keywords, quoted phrases, Boolean expressions, wildcards (*), parameter/value pairs, and comparison expressions. Unless you’re joining two explicit Boolean expressions, omit the AND operator: Splunk treats the space between any two search terms as an implicit AND.
Basic Search offers a shorthand for simple keyword searches in a body of indexed data, such as the index myIndex, without further processing:
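A minimal sketch of such a search (the keyword failed is a placeholder, not from the original example):

```spl
index=myIndex failed
```

This returns every event in myIndex whose raw text contains the keyword failed, with no further commands in the pipeline.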
An event is an entry of data representing a set of values associated with a timestamp. It can be a text document, configuration file, or entire stack trace. Here is an example of an event in a web activity log:
[10/Aug/2022:18:23:46] userID=176 country=US paymentID=30495
Search commands help filter unwanted events, extract additional information, calculate values, transform data, and statistically analyze the indexed data. It is a process of narrowing the data down to your focus. Note the decreasing number of results below:
Common Search Commands
| Command | Description |
|---|---|
| `chart`/`timechart` | Returns results in a tabular output for (time-series) charting |
| `dedup X` | Removes duplicate results on a field X |
| `eval` | Calculates an expression (see Calculations) |
| `fields` | Removes fields from search results |
| `head/tail N` | Returns the first/last N results, where N is a positive integer |
| `lookup` | Adds field values from an external source |
| `rename` | Renames a field. Use wildcards (*) to specify multiple fields |
| `rex` | Extracts fields according to specified regular expression(s) |
| `search` | Filters results to those that match the search expression |
| `sort X` | Sorts the search results by the specified fields X |
| `stats` | Provides statistics, grouped optionally by fields |
| `mstats` | Similar to stats but used on metrics instead of events |
| `table` | Displays data fields in table format |
| `top/rare` | Displays the most/least common values of a field |
| `transaction` | Groups search results into transactions |
| `where` | Filters search results using eval expressions; useful for comparing two different fields |
Begin by specifying the data using the parameter index, the equal sign =, and the data index of your choice:
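For instance, assuming your instance has an index named cybersecurity:

```spl
index="cybersecurity"
```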
Complex queries involve the pipe character |, which feeds the output of the previous query into the next.
This is the shorthand query to find the word hacker in an index called cybersecurity:
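Assuming that index exists, the query reads:

```spl
index="cybersecurity" hacker
```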
| SPL search terms | Description |
|---|---|
| **Full Text Search** | |
| `Cybersecurity` | Find the word “Cybersecurity” irrespective of capitalization |
| `cyber security certification` | Find those three words in any order, irrespective of capitalization |
| `"cyber security certification!"` | Find the exact phrase with the given special characters, irrespective of capitalization |
| **Filter by fields** | |
| `status=404` | All lines where the field status has the value 404 |
| `index="bigdata" source="bigdata.rar" Code=RED` | All entries where the field Code has value RED in the archive bigdata.rar indexed as bigdata |
| `index="comments" "excellent"` | All entries whose text contains the keyword “excellent” in the indexed data set comments |
| **Filter by host** | |
| `host="localhost"` | Show all events from the host localhost |
| **Selecting an index** | |
| `index="myIndex"` | Access the index called myIndex |
| `source="bigdata.rar"` | Access the data archive called bigdata.rar |
| `sourcetype="syslog"` | (Optional) Search data sources whose type is syslog |
This syntax also applies to the arguments following the search keyword. Here is an example of a longer SPL search string:
index=* OR index=_* sourcetype=generic_logs | search Cybersecurity | head 10000
In this example, index=* OR index=_* sourcetype=generic_logs is the data body on which Splunk performs search Cybersecurity, and then head 10000 causes Splunk to show only the first (up to) 10,000 entries.
You can filter your data using regular expressions and the Splunk keywords rex and regex. An example of finding deprecation warnings in the logs of an app would be:
index="app_logs" | regex error="Deprecation Warning"
| Keyword | Function | Example |
|---|---|---|
| `search` | Find keywords and/or fields with given values | `index="app_logs" error` |
| `regex` | Find expressions matching a given regular expression | Find logs not containing IPv4 addresses: `index="app_logs" \| regex ip!="^\d{1,3}\.\d{1,3}\.\d{1,3}\.\d{1,3}$"` |
| `rex` | Extract fields according to specified regular expression(s) into a new field for further processing | Extract email addresses: `index="app_logs" \| rex "(?<email>[\w.-]+@[\w.-]+)"` |
The biggest difference between search and regex is that you can only exclude query strings with regex. These two queries are equivalent:
source="access.log" Fatal
source="access.log" | regex _raw=".*Fatal.*"
But you can only use regex to find events that do not include your desired search term:
source="access.log" | regex _raw!=".*Fatal.*"
The Splunk keyword rex helps determine the alphabetical codes involved in this dataset:
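As an illustration only (the index name and the three-letter pattern are assumptions, since the original dataset is not shown), rex can extract such codes into a new field and tally them:

```spl
index="myIndex" | rex "(?<code>[A-Z]{3})" | stats count by code
```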
Combine the following with eval to do computations on your data, such as finding the mean, longest and shortest comments in the following example:
index=comments | eval cmt_len=len(comment) | stats avg(cmt_len), max(cmt_len), min(cmt_len) by index
| Function | Return value / Action |
|---|---|
| `abs(X)` | Absolute value of X |
| `case(X,"Y",…)` | Takes pairs of arguments X and Y, where the X arguments are Boolean expressions. Returns the Y argument corresponding to the first X that evaluates to TRUE |
| `ceiling(X)` | Ceiling of a number X |
| `cidrmatch("X",Y)` | Identifies IP addresses that belong to a particular subnet |
| `coalesce(X,…)` | The first value that is not NULL |
| `cos(X)` | Cosine of X |
| `exact(X)` | Evaluates an expression X using double precision floating point arithmetic |
| `exp(X)` | e (the natural number) to the power X (e^X) |
| `if(X,Y,Z)` | If X evaluates to TRUE, the result is the second argument Y. If X evaluates to FALSE, the result evaluates to the third argument Z |
| `in(field,valuelist)` | TRUE if a value in valuelist matches a value in field. You must use the in() function embedded inside the if() function |
| `isbool(X)` | TRUE if X is Boolean |
| `isint(X)` | TRUE if X is an integer |
| `isnull(X)` | TRUE if X is NULL |
| `isstr(X)` | TRUE if X is a string |
| `len(X)` | Character length of string X |
| `like(X,"Y")` | TRUE if and only if X is like the SQLite pattern in Y |
| `log(X,Y)` | Logarithm of the first argument X, where the second argument Y is the base. Y defaults to 10 (base-10 logarithm) |
| `lower(X)` | Lowercase of string X |
| `ltrim(X,Y)` | X with the characters in Y trimmed from the left side. Y defaults to spaces and tabs |
| `match(X,Y)` | TRUE if X matches the regular expression pattern Y |
| `max(X,…)` | The maximum value in a series of data X,… |
| `md5(X)` | MD5 hash of a string value X |
| `min(X,…)` | The minimum value in a series of data X,… |
| `mvcount(X)` | Number of values of X |
| `mvfilter(X)` | Filters a multi-valued field based on the Boolean expression X |
| `mvindex(X,Y,Z)` | Returns a subset of the multi-valued field X from start position (zero-based) Y to Z (optional) |
| `mvjoin(X,Y)` | Joins the individual values of a multi-valued field X using string delimiter Y |
| `now()` | Current time as a Unix timestamp |
| `null()` | NULL value. This function takes no arguments |
| `nullif(X,Y)` | X if the two arguments, fields X and Y, are different; otherwise NULL |
| `random()` | Pseudo-random number ranging from 0 to 2147483647 |
| `relative_time(X,Y)` | Unix timestamp value of relative time specifier Y applied to Unix timestamp X |
| `replace(X,Y,Z)` | A string formed by substituting string Z for every occurrence of regex string Y in string X. The example `replace(date, "^(\d{1,2})/(\d{1,2})/", "\2/\1/")` swaps the month and day numbers of a date |
| `round(X,Y)` | X rounded to the number of decimal places specified by Y, or to an integer if Y is omitted |
| `rtrim(X,Y)` | X with the characters in (optional) Y trimmed from the right side. Trims spaces and tabs if Y is unspecified |
| `split(X,"Y")` | X as a multi-valued field, split by delimiter Y |
| `sqrt(X)` | Square root of X |
| `strftime(X,Y)` | Unix timestamp value X rendered using the format specified by Y |
| `strptime(X,Y)` | Unix timestamp of the time string X, parsed according to format Y |
| `substr(X,Y,Z)` | Substring of X from start position (1-based) Y for (optional) Z characters |
| `time()` | Current time to the microsecond |
| `tonumber(X,Y)` | Converts input string X to a number of numerical base Y (optional, defaults to 10) |
| `tostring(X,Y)` | Field value of X as a string. If X is a number, it is reformatted as a string; if X is a Boolean value, as "True" or "False". If X is a number, the optional second argument Y is one of: "hex" (convert X to hexadecimal), "commas" (format X with commas and two decimal places), or "duration" (convert seconds X to the readable time format HH:MM:SS) |
| `typeof(X)` | String representation of the field type |
| `urldecode(X)` | URL X, decoded |
| `validate(X,Y,…)` | For pairs of Boolean expressions X and strings Y, returns the string Y corresponding to the first expression X that evaluates to FALSE; defaults to NULL if all X are TRUE |
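To tie several of these functions together, here is a sketch (the index web_logs and the fields bytes and status are hypothetical):

```spl
index="web_logs"
| eval size_mb=round(bytes/1024/1024, 2)
| eval outcome=if(status<400, "success", "failure")
| stats count by outcome
```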
Statistical and Graphing Functions
Common statistical functions used with the chart, stats, and timechart commands. Field names can contain wildcards (*), so avg(*delay) might calculate the average of the delay and xdelay fields.

| Function | Return value |
|---|---|
| `avg(X)` | Average of the values of field X |
| `count(X)` | Number of occurrences of the field X. To indicate a specific field value to match, format X as eval(field="value") |
| `dc(X)` | Count of distinct values of the field X |
| `earliest(X)`, `latest(X)` | Chronologically earliest/latest seen value of X |
| `max(X)` | Maximum value of the field X. For non-numeric values of X, computes the max using alphabetical ordering |
| `median(X)` | Middle-most value of the field X |
| `min(X)` | Minimum value of the field X. For non-numeric values of X, computes the min using alphabetical ordering |
| `mode(X)` | Most frequent value of the field X |
| `perc<N>(Y)` | N-th percentile value of the field Y, where N is a non-negative integer < 100. Example: perc50(total) |
| `range(X)` | Difference between the max and min values of the field X |
| `stdev(X)` | Sample standard deviation of the field X |
| `stdevp(X)` | Population standard deviation of the field X |
| `sum(X)` | Sum of the values of the field X |
| `sumsq(X)` | Sum of the squares of the values of the field X |
| `values(X)` | List of all distinct values of the field X as a multi-value entry, in alphabetical order |
| `var(X)` | Sample variance of the field X |
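For example, a sketch (the index and field names are placeholders) computing several of these statistics per host:

```spl
index="web_logs" | stats count, dc(clientip), avg(bytes), perc95(bytes) by host
```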
Compute index-related statistics.
From this point onward, splunk refers to the partial or full path of the Splunk app on your device, $SPLUNK_HOME/bin/splunk, such as /Applications/Splunk/bin/splunk on macOS, or, if you have performed cd into /Applications/Splunk/bin/, simply ./splunk.
| Query | Description |
|---|---|
| `\| eventcount summarize=false index=* \| dedup index \| fields index` | List all indexes on your Splunk instance |
| `\| eventcount summarize=false index=* report_size=true \| eval size_MB=round(size_bytes/1024/1024, 2)` | Show the number of events in your indexes and their sizes in MB and bytes |
| `\| rest /services/data/indexes \| table title currentDBSizeMB` | List the titles and current database sizes in MB of the indexes on your Indexers |
| `index=_internal source=*metrics.log group=per_index_thruput \| chart sum(kb) by series` | Query write amount in KB per index from metrics.log |
| `index=_internal metrics kb series!=_* "group=per_host_thruput" \| timechart fixedrange=f span=1d sum(kb) by series` | Query write amount in KB per day per Indexer by each host |
| `index=_internal metrics kb series!=_* "group=per_index_thruput" \| timechart fixedrange=f span=1d sum(kb) by series` | Query write amount in KB per day per Indexer by each index |
To reload Splunk, enter the following in the address bar or command-line interface.

| Command | Description |
|---|---|
| `http://<splunk-host>:8000/en-US/debug/refresh` | Reload Splunk. Replace `<splunk-host>:8000` with the address and port of your Splunk instance |
| `http://<splunk-host>:8000/en-US/debug/refresh?entity=admin/monitor` | Reload the Splunk file input configuration |
| `cd $SPLUNK_HOME/bin; ./splunk stop; ./splunk start` | These three commands in succession restart Splunk |
You can enable traces for individual log channels. To change trace topics permanently, edit $SPLUNK_HOME/etc/log.cfg and change the trace level, for example, from INFO to DEBUG:
08-10-2022 05:20:18.653 -0400 INFO ServerConfig [0 MainThread] - Will generate GUID, as none found on this server.
08-10-2022 05:20:18.653 -0400 DEBUG ServerConfig [0 MainThread] - Will generate GUID, as none found on this server.
To change the trace settings only for the current instance of Splunk, go to Settings > Server Settings > Server Logging:
Filter the log channels as above.
Select your new log trace topic and click Save. This persists until you stop the server.
The following commands change or inspect Splunk settings. Where necessary, append -auth user:pass to the end of your command to authenticate with your Splunk web server credentials.

| Command | Description |
|---|---|
| `splunk btool inputs list` | List Splunk configurations (replace inputs with the name of the .conf file you want to inspect) |
| `splunk btool check` | Check Splunk configuration syntax |
| `\| rest /services/data/inputs/tcp/raw` | List TCP inputs |
| `\| rest /services/data/inputs/tcp/raw \| search sourcetype=foo` | Restrict the listing of TCP inputs to only those with a source type of foo |
| `\| rest /services/licenser/licenses` | License details of your current Splunk instance |
| `splunk list licenses` | Show your current license |
| `splunk _internal call /authentication/providers/services/_reload` | Reload authentication configurations for Splunk 6.x |
| `\| rest /services/authentication/users \| search roles="admin"` | Search for all users who are admins |
| `\| rest /services/authentication/users \| search capabilities="indexes_edit"` | See which users can edit indexes |
| `\| rest /services/authentication/users/<username>` | Use the remove link in the returned XML output to delete the user <username> |
Importing large volumes of data takes a long time. If you’re using Splunk in-house, the software installation of Splunk Enterprise alone requires ~2GB of disk space. You can find an excellent online calculator at splunk-sizing.appspot.com.
The essential factors to consider are:
- Input data
- Specify the amount of data concerned. The more data you send to Splunk Enterprise, the more time Splunk needs to index it into results that you can search, report and generate alerts on.
- Data Retention
- Specify how long you want to keep the data. Retention in Splunk is configurable per index and is typically tiered:
- Hot/Warm: short-term, in days.
- Cold: mid-term, in weeks.
- Archived (Frozen): long-term, in months.
- Number of Nodes
- Specify the number of nodes required. The more data you ingest, the more nodes you need. Adding nodes improves indexing throughput and search performance.
- Storage Required
- Specify how much space you need for hot/warm, cold, and archived data storage.
- Storage Configuration
- Specify the location of the storage configuration. If possible, spread each type of data across separate volumes to improve performance: hot/warm data on the fastest disk, cold data on a slower disk, and archived data on the slowest.
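The retention tiers and storage layout above correspond to per-index settings in indexes.conf. A minimal sketch, with illustrative paths and values rather than recommendations:

```ini
# indexes.conf: hypothetical index with tiered storage
[web_logs]
homePath   = /fast_disk/splunk/web_logs/db         # hot/warm buckets on the fastest disk
coldPath   = /slow_disk/splunk/web_logs/colddb     # cold buckets on a slower disk
thawedPath = /slow_disk/splunk/web_logs/thaweddb   # location for restored (thawed) frozen data
coldToFrozenDir = /archive/splunk/web_logs/frozen  # archive destination for frozen buckets
frozenTimePeriodInSecs = 7776000                   # roll to frozen after 90 days (in seconds)
```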
We hope this Splunk cheat sheet makes Splunk a more enjoyable experience for you. To download a PDF version of this Splunk cheat sheet, click here.