`notable`
| stats latest(lastTime) as LastTimeSeen values(rule_name) as "Rule Name" values(comment) as "Historical Analysis" values(user) as User by _time, event_id, urgency
| eval LastTimeSeen=strftime(LastTimeSeen,"%+")
exploremydata – data explorer
This dashboard provides an overview of the data that is available to query. Click an index below to review the source types in that index, then click a sourcetype to review its fields. Finally, you can click a field to see sample values in that field. Click “Show Filters” above to open a search window […]
Character Count Per Event
Here’s an incredibly simple Splunk query to count the number of characters in an event:
index=* | eval CharCount=len(_raw)
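To make the per-event count useful at a glance, you can roll it up into summary statistics. A minimal sketch (AvgChars and MaxChars are just illustrative names):
index=* | eval CharCount=len(_raw) | stats avg(CharCount) as AvgChars max(CharCount) as MaxChars by sourcetype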
Fire-Breathing Dragon when Starting dbx_task_server
index=_internal sourcetype=dbx_server Starting dbx_task_server
This returns events that display a little dragon in ASCII art:
      |\___/|
      (,\  /,)\
      /     /  \
     (@_^_@)/    \
      W//W_/      \
    (//) |         \
  (/ /) _|_ /   )   \
  (// /) '/,_ _ _/  (~^-.
 (( // )) ,-{        _    `.
(( /// ))  '/\      /       |
(( ///))     `.   {       }
((/ ))    .----~-.\   \-'
         ///.----..>    \
          ///-._ _ _ _}
Multiple Malware Detections on a Single Host
This is a simple enough query for detecting a host with multiple infections/detections. The reason for bucketing and searching over a longer time span (say, 60m) is that I found it gives better results and fewer false negatives when the infrastructure isn’t set up to ingest data in near real time.
index=malware category="something_high_fidelity" | bucket _time span=15m | stats count by _time, dest | where count>=3
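If a single noisy signature can trip the threshold on its own, a variant that counts distinct signatures may hold up better. A sketch, assuming your malware events carry a signature field (a placeholder name; check your own field extractions):
index=malware category="something_high_fidelity" | bucket _time span=15m | stats dc(signature) as sig_count by _time, dest ```signature is a placeholder field name``` | where sig_count>=3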
Baselining Dashboard
This is a better and more flexible option than timewrap, in my opinion. Performance ain’t too shabby either.
index=foo earliest=-1d latest=now | timechart span=10m count as Current | appendcols [search index=foo earliest=-1mon-1d latest=-mon | timechart span=10m count as "-1 Month"] | appendcols [search index=foo earliest=-1w-1d latest=-w | timechart span=10m count as "-1 Week"]
Disk Usage per Index by Indexer
Summary: Instead of grabbing data from all time, the dbinspect command lets administrators quickly determine how big an index is. There are additional fields in the dbinspect output, so explore those to gain other data pivots.
| dbinspect index=_internal | stats sum(sizeOnDiskMB) by splunk_server
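To get the per-index breakdown the title promises, the same idea extends across all indexes. A sketch:
| dbinspect index=* | stats sum(sizeOnDiskMB) as TotalMB by index, splunk_server | sort - TotalMB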
Exclude single event type from logs
Add this to transforms.conf on the heavy forwarder (HF):
[discard_gotoips]
REGEX = <<<use regex,URL>>>
DEST_KEY = queue
FORMAT = nullQueue
And this to props.conf:
[default]
TRANSFORMS-null = discard_gotoips
File location: /etc/system/local
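As a concrete illustration only, the transforms.conf stanza above might be filled in like this; the regex is hypothetical, so substitute a pattern that matches the URL in your own events:
[discard_gotoips]
# Hypothetical pattern: drop any event containing go.example.com
REGEX = go\.example\.com
DEST_KEY = queue
FORMAT = nullQueue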
Find unused dashboards
Use this search to find unused dashboards:
| rest /servicesNS/-/-/data/ui/views splunk_server=*
| search isDashboard=1
| rename eai:acl.app as app
| fields title app
| join type=left title
    [| search index=_internal sourcetype=splunk_web_access host=* user=*
    | rex field=uri_path ".*/(?<title>[^/]*)$"
    | stats latest(_time) as Time latest(user) as user by title]
| where isnotnull(Time)
| eval Now=now()
| eval "Days since last accessed"=round((Now-Time)/86400,2)
| sort - "Days since last accessed"
| convert ctime(Time)
| fields - Now
Admin Notes – Fantastic query! I modified the SPL slightly as I had an issue when I copied it to my two test environments.
List all ES Correlation Searches
| rest splunk_server=local count=0 /services/saved/searches
| where match('action.correlationsearch.enabled', "1|[Tt]|[Tt][Rr][Uu][Ee]")
| rex field=action.customsearchbuilder.spec "datamodel\\\":\s+\\\"(?<Data_Model>\w+)"
| rex field=action.customsearchbuilder.spec "object\\\":\s+\\\"(?<Dataset>\w+)"
| rename action.correlationsearch.label as Search_Name
    title as Rule_Name
    eai:acl.app as Application_Context
    request.ui_dispatch_app as UI_Dispatch_Context
    description as Description
    Data_Model as Guided_Mode:Data_Model
    Dataset as Guided_Mode:Dataset
    action.customsearchbuilder.enabled as Guided_Mode
    action.customsearchbuilder.spec as Guided_Mode:Search_Logic
    search as Search
    dispatch.earliest_time as Earliest_Time
    dispatch.latest_time as Latest_Time
    cron_schedule as Cron_Schedule
    schedule_window as Schedule_Window
    schedule_priority as Schedule_Priority
    alert_type as Trigger_Conditions:Trigger_Alert_When
    alert_comparator as Trigger_Conditions:Alert_Comparator
    alert_threshold as Trigger_Conditions:Alert_Threshold
    alert.suppress.period as Throttling:Window_Duration
    alert.suppress.fields as Throttling:Fields_To_Group_By
    action.notable.param.rule_title as Notable:Title
    action.notable.param.rule_description as Notable:Description
    action.notable.param.security_domain as Notable:Security_Domain
    action.notable.param.severity as Notable:Severity
    action.notable.param.default_owner as Notable:Default_Owner
    action.notable.param.default_status as Notable:Default_Status
    action.notable.param.drilldown_name as Notable:Drill-down_Name
    action.notable.param.drilldown_search as Notable:Drill-down_Search
    action.notable.param.drilldown_earliest_offset as Notable:Drill-down_Earliest_Offset
    action.notable.param.drilldown_latest_offset as Notable:Drill-down_Latest_Offset
    action.notable.param.next_steps as Notable:Next_Steps
    action.risk.param._risk_score as Risk_Analysis:Risk_Score
    action.risk.param._risk_object as Risk_Analysis:Risk_Object_Field
    action.risk.param._risk_object_type as Risk_Analysis:Risk_Object_Type
| eval "Guided_Mode:Enabled" = if(Guided_Mode == 1, "Yes", "No")
| eval "Real-time_Scheduling_Enabled" = if(realtime_schedule == 1, "Yes", "No")
| table disabled Search_Name, Rule_Name, Application_Context, UI_Dispatch_Context, Description,
    Guided_Mode:Enabled, Guided_Mode:Data_Model, Guided_Mode:Dataset, Guided_Mode:Search_Logic,
    Search, Earliest_Time, Latest_Time, Cron_Schedule, Real-time_Scheduling_Enabled,
    Schedule_Window, Schedule_Priority,
    Trigger_Conditions:Trigger_Alert_When, Trigger_Conditions:Alert_Comparator, Trigger_Conditions:Alert_Threshold,
    Throttling:Window_Duration, Throttling:Fields_To_Group_By,
    Notable:Title, Notable:Description, Notable:Security_Domain, Notable:Severity,
    Notable:Default_Owner, Notable:Default_Status,
    Notable:Drill-down_Name, Notable:Drill-down_Search, Notable:Drill-down_Earliest_Offset, Notable:Drill-down_Latest_Offset,
    Notable:Next_Steps,
    Risk_Analysis:Risk_Score, Risk_Analysis:Risk_Object_Field, Risk_Analysis:Risk_Object_Type
Add a count of events by fieldname
The streamstats count command creates a field called eventCount that holds a running count of events for each value of the fieldname you specify:
| streamstats count as eventCount by fieldname
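For example, substituting a concrete field such as host (purely illustrative), this numbers each event per host as it streams by:
index=_internal | streamstats count as eventCount by host | table _time host eventCount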
List all fields for an index
A few different queries/methods to list all fields in an index.
index=yourindex | fieldsummary | table field
or
index=yourindex | stats values(*) AS * | transpose | table column | rename column AS Fieldnames
or
index=yourindex | stats dc(*) as * | transpose
or ;-)
index=yourindex | table *
Easter egg that creates sample data
| windbag
This command generates a sample data set of 100 events.
List of indexes available to your role
| tstats count WHERE index=* OR index=_* BY index
Don’t forget that a time modifier is required.
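For example, a sketch constrained to the last 24 hours:
| tstats count WHERE index=* OR index=_* earliest=-24h BY index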
Fishies! Fun Query and Easter Egg
Here is a fun query that you may have seen as an Easter egg in an app. I stumbled on this while cleaning up old saved searches. If you know the app, comment below! FYI, make sure you run this in real time, otherwise you won’t see the fun part :)
index=_* OR index=* | head 1 | eval fish="><((*>" | eval fishies=mvappend(fish,fish) | eval fishies=mvappend(fishies,fishies) | eval fishies=mvappend(fishies,fishies) | eval fishies=mvappend(fishies,fishies) | eval spawnify=fishies | mvexpand spawnify | eval fishies=mvjoin(fishies," ") | streamstats count as offset | eval offset=(offset*3) % 7 | addinfo | eval make_swim=round(info_max_time-info_search_time) | eval fishies=substr(fishies,(10*16)-(make_swim-offset+10),100+offset) | fields fishies | streamstats count | eval fishies=if(count==16,"FATAL ERROR: you have unleashed an army of fish.",fishies) | fields fishies | rename fishies as _raw | fields - _time | eval _raw=substr(_raw,0,100)
Current Vulnerability Summary by Severity (Tenable)
With Tenable Security Center connected via the Splunk plugin, this search gives an overview of all vulnerabilities, summarized by severity.
sourcetype="tenable:sc:vuln" severity.name=* | chart count over severity.name by ip |
Add the following to your dashboard source to add consistent colors to the pie chart:
<option name="charting.fieldColors">{"Critical":0x800000,"High":0xFF0000,"Medium":0xFFA500,"Low":0x008000,"Info":0x0000FF}</option>
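If you also want the trend over time rather than a point-in-time summary, a sketch along the same lines:
sourcetype="tenable:sc:vuln" severity.name=* | timechart count by severity.name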
Pearson Coefficient of Two Fields
The following SPL query calculates the Pearson correlation coefficient of two fields named x and y.
index=*
| fields x y
| eval n=1 | eval xx=x*x | eval xy=x*y | eval yy=y*y
| addcoltotals | tail 1
| eval rho_xy=(xy/n-x/n*y/n)/(sqrt(xx/n-(x/n)*(x/n))*sqrt(yy/n-(y/n)*(y/n)))
| fields rho_xy
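To sanity-check the formula without real data, you can fabricate a correlated pair with makeresults; the random() and modulo arithmetic below is just a quick way to synthesize a roughly linear relationship, so rho_xy should land close to 1:
| makeresults count=500
| eval x=random()%100 | eval y=2*x+random()%20 ```synthetic, roughly linear data```
| eval n=1 | eval xx=x*x | eval xy=x*y | eval yy=y*y
| addcoltotals | tail 1
| eval rho_xy=(xy/n-x/n*y/n)/(sqrt(xx/n-(x/n)*(x/n))*sqrt(yy/n-(y/n)*(y/n)))
| fields rho_xy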