Splunk Cheat Sheet: Search and Query Commands


Whether you’re a cybersecurity professional, data scientist, or system administrator, when you mine large volumes of data for insights using Splunk, having a list of Splunk query commands at hand helps you focus on your work and solve problems faster than poring over the official documentation.

This article is the convenient list you need. It provides several lists organized by the type of queries you would like to conduct on your data: basic pattern search on keywords, basic filtering using regular expressions, mathematical computations, and statistical and graphing functionalities.

The following Splunk cheat sheet assumes you have Splunk installed. It is a refresher on useful Splunk query commands. Download a PDF of this Splunk cheat sheet here.

Brief Introduction of Splunk

The Internet of Things (IoT) and Internet of Bodies (IoB) generate vast amounts of data, and searching for a needle in such a haystack can be daunting.

Splunk is a Big Data mining tool that makes it easier for users to excavate and analyze machine-generated data, and to visualize and create reports on that data.

Splunk Enterprise search results on sample data

Splunk contains three processing components:

  • The Indexer parses and indexes data added to Splunk.
  • The Forwarder (optional) sends data from a source.
  • The Search Head is for searching, analyzing, visualizing, and summarizing your data.
Splunk Processing Components

Search Language in Splunk

Splunk uses what’s called Search Processing Language (SPL), which consists of keywords, quoted phrases, Boolean expressions, wildcards (*), parameter/value pairs, and comparison expressions. Unless you’re joining two explicit Boolean expressions, omit the AND operator because Splunk assumes the space between any two search terms to be AND.
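For example, assuming a hypothetical index called web_logs, the following two searches are equivalent, since the space between terms implies AND:

index=web_logs error 404
index=web_logs error AND 404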

Basic Search offers a shorthand for simple keyword searches in a body of indexed data (here, myIndex) without further processing:

index=myIndex keyword

An event is an entry of data representing a set of values associated with a timestamp. It can be a text document, configuration file, or entire stack trace. Here is an example of an event in a web activity log:

[10/Aug/2022:18:23:46] userID=176 country=US paymentID=30495

Search commands help filter unwanted events, extract additional information, calculate values, transform data, and statistically analyze the indexed data. It is a process of narrowing the data down to your focus. Note the decreasing number of results below:

Finding entries without IPv4 address on sample data

Common Search Commands

chart, timechart - Returns results in a tabular output for (time-series) charting
dedup X - Removes duplicate results on a field X
eval - Calculates an expression (see Calculations)
fields - Removes fields from search results
head/tail N - Returns the first/last N results, where N is a positive integer
lookup - Adds field values from an external source
rename - Renames a field. Use wildcards (*) to specify multiple fields
rex - Extracts fields according to specified regular expression(s)
search - Filters results to those that match the search expression
sort X - Sorts the search results by the specified fields X
stats - Provides statistics, optionally grouped by fields
mstats - Similar to stats but used on metrics instead of events
table - Displays data fields in table format
top/rare - Displays the most/least common values of a field
transaction - Groups search results into transactions
where - Filters search results using eval expressions; use it to compare two different fields
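As a sketch of how several of these commands chain together (the index and field names here are hypothetical), the following removes duplicate users, keeps two fields, sorts by country, and returns the first ten results:

index=web_logs | dedup userID | table userID, country | sort country | head 10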

SPL Syntax

Begin by specifying the data using the parameter index, the equal sign =, and the data index of your choice: index=index_of_choice.

Complex queries involve the pipe character |, which feeds the output of the previous query into the next.

This is the shorthand query to find the word hacker in an index called cybersecurity:

index=cybersecurity hacker

SPL search terms - Description

Full Text Search
Cybersecurity - Find the word “Cybersecurity” irrespective of capitalization
White Black Hat - Find those three words in any order, irrespective of capitalization
"White Black+Hat" - Find the exact phrase with the given special characters, irrespective of capitalization

Filter by fields
source="/var/log/myapp/access.log" status=404 - All lines where the field status has the value 404 in the file /var/log/myapp/access.log
source="bigdata.rar:*" index="data_tutorial" Code=RED - All entries where the field Code has the value RED in the archive bigdata.rar indexed as data_tutorial
index="customer_feedback" _raw="*excellent*" - All entries whose text contains the keyword “excellent” in the indexed data set customer_feedback

Filter by host
host="myblog" source="/var/log/syslog" Fatal - Show all Fatal entries from /var/log/syslog belonging to the blog host myblog

Selecting an index
index="myIndex" password - Access the index called myIndex and match the text password
source="test_data.zip:*" - Access the data archive called test_data.zip and parse all its entries (*)
sourcetype="datasource01" - (Optional) Search data sources whose type is datasource01

This syntax also applies to the arguments following the search keyword. Here is an example of a longer SPL search string:

index=* OR index=_* sourcetype=generic_logs | search Cybersecurity | head 10000

In this example, index=* OR index=_* sourcetype=generic_logs is the data body on which Splunk performs search Cybersecurity, and then head 10000 causes Splunk to show only the first (up to) 10,000 entries.

Basic Filtering

You can filter your data using regular expressions and the Splunk keywords rex and regex. An example of finding deprecation warnings in the logs of an app would be:

index="app_logs" | regex error="Deprecation Warning"

SPL filters - Description - Examples

search - Find keywords and/or fields with given values
  index=names | search Chris
  index=emails | search

regex - Find expressions matching a given regular expression
  Find logs not containing IPv4 addresses: index=syslogs | regex

rex - Extract fields according to specified regular expression(s) into a new field for further processing
  Extract email addresses: source="email_dump.txt" | rex field=_raw "From: <(?<from>.*)> To: <(?<to>.*)>"

The biggest difference between search and regex is that only regex lets you exclude events that match a pattern. These two are equivalent:

  • source="access.log" Fatal
  • source="access.log" | regex _raw=".*Fatal.*"

But you can only use regex to find events that do not include your desired search term:

  • source="access.log" | regex _raw!=".*Fatal.*"

The Splunk keyword rex helps determine the alphabetical codes involved in this dataset:

Alphabetical codes in sample data



Calculations

Combine the following functions with eval to do computations on your data, such as finding the mean and the longest and shortest comments in the following example:

index=comments | eval cmt_len=len(comment) | stats avg(cmt_len), max(cmt_len), min(cmt_len) by index

Function - Return value / action - Usage (eval foo=…)

abs(X) - Absolute value of X - abs(number)
case(X,"Y",…) - Takes pairs of arguments X and Y, where the X arguments are Boolean expressions; returns the Y argument corresponding to the first X that evaluates to TRUE - case(id == 0, "Amy", id == 1, "Brad", id == 2, "Chris")
ceil(X) - Ceiling of a number X - ceil(1.9)
cidrmatch("X",Y) - Identifies IP addresses Y that belong to a particular subnet X - cidrmatch("",ip)
coalesce(X,…) - The first value that is not NULL - coalesce(null(), "Returned val", null())
cos(X) - Cosine of X (in radians) - n=cos(pi()/3) # 0.5
exact(X) - Evaluates an expression X using double-precision floating-point arithmetic - exact(3.14*num)
exp(X) - e (the natural number) to the power X - exp(3)
if(X,Y,Z) - If X evaluates to TRUE, returns the second argument Y; if X evaluates to FALSE, returns the third argument Z - if(error==200, "OK", "Error")
in(field,valuelist) - TRUE if a value in valuelist matches a value in field. You must use in() embedded inside the if() function - if(in(status, "404","500","503"), "true", "false")
isbool(X) - TRUE if X is Boolean - isbool(field)
isint(X) - TRUE if X is an integer - isint(field)
isnull(X) - TRUE if X is NULL - isnull(field)
isstr(X) - TRUE if X is a string - isstr(field)
len(X) - Character length of string X - len(field)
like(X,"Y") - TRUE if and only if X is like the SQLite pattern in Y - like(field, "addr%")
log(X,Y) - Logarithm of X with base Y; Y defaults to 10 (base-10 logarithm) - log(number,2)
lower(X) - Lowercase of string X - lower(username)
ltrim(X,Y) - X with the characters in Y trimmed from the left side; Y defaults to spaces and tabs - ltrim(" ZZZabcZZ ", " Z")
match(X,Y) - TRUE if X matches the regular expression pattern Y - match(field, "^\d{1,3}\.\d$")
max(X,…) - The maximum value in a series of data X,… - max(delay, mydelay)
md5(X) - MD5 hash of a string value X - md5(field)
min(X,…) - The minimum value in a series of data X,… - min(delay, mydelay)
mvcount(X) - Number of values of X - mvcount(multifield)
mvfilter(X) - Filters a multi-valued field based on the Boolean expression X - mvfilter(match(email, "net$"))
mvindex(X,Y,Z) - Returns a subset of the multi-valued field X from start position (zero-based) Y to Z (optional) - mvindex(multifield, 2)
mvjoin(X,Y) - Joins the individual values of a multi-valued field X using string delimiter Y - mvjoin(address, ";")
now() - Current time as a Unix timestamp - now()
null() - NULL value; takes no arguments - null()
nullif(X,Y) - X if the two arguments, fields X and Y, are different; otherwise NULL - nullif(fieldX, fieldY)
random() - Pseudo-random number ranging from 0 to 2147483647 - random()
relative_time(X,Y) - Unix timestamp value of relative time specifier Y applied to Unix timestamp X - relative_time(now(), "-1d@d")
replace(X,Y,Z) - A string formed by substituting string Z for every occurrence of regex string Y in string X. The example swaps the month and day numbers of a date - replace(date, "^(\d{1,2})/(\d{1,2})/", "\2/\1/")
round(X,Y) - X rounded to the number of decimal places specified by Y, or to an integer if Y is omitted - round(3.5)
rtrim(X,Y) - X with the characters in (optional) Y trimmed from the right side; trims spaces and tabs if Y is unspecified - rtrim(" ZZZZabcZZ ", " Z")
split(X,"Y") - X as a multi-valued field, split by delimiter Y - split(address, ";")
sqrt(X) - Square root of X - sqrt(9) # 3
strftime(X,Y) - Unix timestamp value X rendered using the format specified by Y - strftime(time, "%H:%M")
strptime(X,Y) - Unix timestamp value of the string X, parsed according to format Y - strptime(timeStr, "%H:%M")
substr(X,Y,Z) - Substring of X from start position (one-based) Y for (optional) Z characters - substr("string", 1, 3) # str
time() - Current time to the microsecond - time()
tonumber(X,Y) - Converts input string X to a number of numerical base Y (optional; defaults to 10) - tonumber("FF",16)
tostring(X,Y) - Field value of X as a string. A number is reformatted as a string; a Boolean value is reformatted to “True” or “False”. If X is a number, the optional second argument Y is one of “hex” (convert X to hexadecimal), “commas” (format X with commas and two decimal places), or “duration” (convert seconds X to the readable format HH:MM:SS). This example returns bar=00:08:20: | makeresults | eval bar = tostring(500, "duration")
typeof(X) - String representation of the field type. This example returns "NumberBool": | makeresults | eval n=typeof(12) + typeof(1==2)
urldecode(X) - URL X, decoded - urldecode("http%3A%2F%2Fwww.site.com%2Fview%3Fr%3Dabout")
validate(X,Y,…) - For pairs of Boolean expressions X and strings Y, returns the string Y corresponding to the first X that evaluates to FALSE; defaults to NULL if all X are TRUE - validate(isint(N), "Not an integer", N>0, "Not positive")
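A convenient way to experiment with these functions is a makeresults query, which generates a single event to evaluate against; the field names below are arbitrary:

| makeresults | eval status="404" | eval rounded=round(3.14159, 2), name=lower("ADMIN"), matched=if(in(status, "404", "500"), "match", "no match")

This returns rounded=3.14, name=admin, and matched=match.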

Statistical and Graphing Functions

Common statistical functions used with the chart, stats, and timechart commands. Field names can contain wildcards (*); for example, avg(*delay) calculates the average over every field whose name ends in delay.

Function - Return value (Usage: stats foo=… / chart bar=… / timechart t=…)

avg(X) - Average of the values of field X
count(X) - Number of occurrences of the field X. To indicate a specific field value to match, format X as eval(field="desired_value")
dc(X) - Count of distinct values of the field X
earliest(X)/latest(X) - Chronologically earliest/latest seen value of X
max(X) - Maximum value of the field X. For non-numeric values of X, the max is computed using alphabetical ordering
median(X) - Middle-most value of the field X
min(X) - Minimum value of the field X. For non-numeric values of X, the min is computed using alphabetical ordering
mode(X) - Most frequent value of the field X
percN(Y) - N-th percentile value of the field Y, where N is a non-negative integer below 100. Example: perc50(total) is the 50th-percentile value of the field total
range(X) - Difference between the max and min values of the field X
stdev(X) - Sample standard deviation of the field X
stdevp(X) - Population standard deviation of the field X
sum(X) - Sum of the values of the field X
sumsq(X) - Sum of the squares of the values of the field X
values(X) - List of all distinct values of the field X as a multi-value entry, in alphabetical order
var(X) - Sample variance of the field X
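As an illustration, using the web-activity event format shown earlier (the index name web_logs is hypothetical), several of these functions can be combined in a single stats command:

index=web_logs | stats count, dc(userID), max(paymentID) by country

This counts events, counts distinct user IDs, and finds the maximum payment ID, grouped by country.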

Index Statistics

Compute index-related statistics.

From this point onward, splunk refers to the partial or full path of the Splunk app on your device $SPLUNK_HOME/bin/splunk, such as /Applications/Splunk/bin/splunk on macOS, or, if you have performed cd and entered /Applications/Splunk/bin/, simply ./splunk.

| eventcount summarize=false index=* | dedup index | fields index
List all indexes on your Splunk instance. On the command line, use this instead: splunk list index

| eventcount summarize=false report_size=true index=* | eval size_MB = round(size_bytes/1024/1024,2)
Show the number of events in your indexes and their sizes in MB and bytes

| rest /services/data/indexes | table title currentDBSizeMB
List the titles and current database sizes in MB of the indexes on your indexers

index=_internal source=*metrics.log group=per_index_thruput series=* | eval MB = round(kb/1024,2) | timechart sum(MB) as MB by series
Query the amount written in MB per index from metrics.log

index=_internal metrics kb series!=_* "group=per_host_thruput" | timechart fixedrange=t span=1d sum(kb) by series
Query the amount written in KB per day per indexer, broken down by host

index=_internal metrics kb series!=_* "group=per_index_thruput" | timechart fixedrange=t span=1d sum(kb) by series
Query the amount written in KB per day per indexer, broken down by index

Reload apps

To reload Splunk, enter the following in the address bar or command line interface.

Address bar - Description
http://localhost:8000/debug/refresh
Reload Splunk. Replace localhost:8000 with the base URL of your Splunk Web server if you’re not running it on your local machine.

Command line - Description
splunk _internal call /data/inputs/monitor/_reload
Reload the Splunk file-input configuration.

splunk stop
splunk enable webserver
splunk start
Run these three commands in succession to restart Splunk.

Debug Traces

You can enable traces listed in $SPLUNK_HOME/var/log/splunk/splunkd.log.

To change trace topics permanently, edit $SPLUNK_HOME/etc/log.cfg and change the trace level of the category you are interested in, for example from INFO to DEBUG:

category.TcpInputProc=DEBUG

For comparison, here is a sample splunkd.log entry at the INFO level and at the DEBUG level:

08-10-2022 05:20:18.653 -0400 INFO ServerConfig [0 MainThread] - Will generate GUID, as none found on this server.

08-10-2022 05:20:18.653 -0400 DEBUG ServerConfig [0 MainThread] - Will generate GUID, as none found on this server.

To change the trace settings only for the current instance of Splunk, go to Settings > Server Settings > Server Logging, filter the log channels to find your topic, select the new trace level, and click Save. The change persists until you stop the server.


Configuration

The following commands change Splunk settings. Where necessary, append -auth user:pass to the end of your command to authenticate with your Splunk web server credentials.

Command line - Description

splunk btool inputs list
List Splunk configurations.

splunk btool check
Check Splunk configuration syntax.

Input management
splunk _internal call /data/inputs/tcp/raw
List TCP inputs.

splunk _internal call /data/inputs/tcp/raw -get:search sourcetype=foo
Restrict the listing of TCP inputs to those with a source type of foo.

License details of your current Splunk instance
splunk list licenses
Show your current license.

User management
splunk _internal call /authentication/providers/services/_reload
Reload authentication configurations for Splunk 6.x.

splunk _internal call /services/authentication/users -get:search admin
Search for all users who are admins.

splunk _internal call /services/authentication/users -get:search indexes_edit
See which users can edit indexes.

splunk _internal call /services/authentication/users/helpdesk -method DELETE
Delete the user helpdesk via the remove link in the returned XML output.

Capacity Planning

Importing large volumes of data can take a long time. If you’re using Splunk in-house, the software installation of Splunk Enterprise alone requires about 2 GB of disk space. You can find an excellent online calculator at splunk-sizing.appspot.com.

The essential factors to consider are:

  • Input data
    • Specify the amount of data concerned. The more data you send to Splunk Enterprise, the more time Splunk needs to index it into results that you can search, report and generate alerts on.
  • Data Retention
    • Specify how long you want to keep the data. Retention is divided into three tiers:
    • Hot/Warm: short-term, in days.
    • Cold: mid-term, in weeks.
    • Archived (Frozen): long-term, in months.
  • Architecture
    • Specify the number of nodes required. The more data to ingest, the greater the number of nodes required. Adding more nodes will improve indexing throughput and search performance.
  • Storage Required
    • Specify how much space you need for hot/warm, cold, and archived data storage.
  • Storage Configuration
    • Specify the location of the storage configuration. If possible, spread each type of data across separate volumes to improve performance: hot/warm data on the fastest disk, cold data on a slower disk, and archived data on the slowest.
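On the Splunk side, retention is governed per index; for example, the indexes.conf setting frozenTimePeriodInSecs controls when data rolls to frozen. A minimal sketch, with a hypothetical index name:

[myIndex]
frozenTimePeriodInSecs = 7776000

Here 7,776,000 seconds is 90 days; events older than this roll to frozen and are archived or deleted according to your frozen-data settings.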

We hope this Splunk cheat sheet makes Splunk a more enjoyable experience for you. To download a PDF version of this Splunk cheat sheet, click here.


  • Cassandra Lee

    Cassandra is a writer, artist, musician, and technologist who makes connections across disciplines: cyber security, writing/journalism, art/design, music, mathematics, technology, education, psychology, and more. She's been a vocal advocate for girls and women in STEM since the 2010s, having written for Huffington Post, International Mathematical Olympiad 2016, and Ada Lovelace Day, and she's honored to join StationX. You can find Cassandra on LinkedIn and Linktree.