Use this search to audit your correlation searches. It includes information such as who authored the correlation search, who modified it, and so on. In addition, the search gives you a brief indication of whether the correlation search has been triggered in the past 30 days, considering it has notable […]
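A minimal sketch of the REST portion of such an audit (the 30-day notable check is omitted here); the `action.correlationsearch.enabled` filter and the `eai:acl.owner`/`updated` fields are assumptions based on how Enterprise Security typically tags correlation searches:

```spl
| rest /services/saved/searches splunk_server=local count=0
``` keep only searches flagged as ES correlation searches ```
| search action.correlationsearch.enabled=1
| table title, eai:acl.owner, updated, disabled
```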
NIX Debian Package (dpkg.log) Dashboard
Description: I wanted a dashboard that would provide information about packages across my Ubuntu servers. At this time I have only built this dashboard to review the “dpkg.log”. In an attempt to help people understand how I build dashboards, I posted a video on YouTube where you can follow along while I build this dashboard out: […]
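As a starting point, dpkg.log lines such as `2023-10-05 12:00:00 status installed libfoo:amd64 1.2.3` can be parsed roughly as follows; the `index` and `sourcetype` names here are assumptions for your environment, not part of the original dashboard:

```spl
index=linux sourcetype=dpkg "status installed"
``` pull the package name, optional architecture, and version out of the raw line ```
| rex "status installed (?<package>[^: ]+)(:(?<arch>\S+))? (?<version>\S+)"
| stats latest(version) as version latest(_time) as last_seen by host, package
| convert ctime(last_seen)
```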
Dashboard to measure Indexes and Sourcetypes, based upon first and last date of events
This dashboard will use REST API endpoints to grab a list of all indexes, and then map out, by sourcetype, how many events there are, when the first one occurred (based upon _time), and when the last one did. It then does basic date math to show how long a period that spans as retention (though it does not show […]
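The core first/last-event math (without the REST index enumeration) can be sketched like this; treat it as an assumption-laden simplification of the dashboard's base search rather than the original:

```spl
| tstats min(_time) as first_event, max(_time) as last_event, count where index=* by index, sourcetype
``` basic date math: span between first and last event, in days ```
| eval retention_days=round((last_event-first_event)/86400, 1)
| convert ctime(first_event) ctime(last_event)
| sort - retention_days
```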
Query to see incidents logged by correlation search in ES incident review dashboard
| `incident_review`
| search rule_name="<correlation_search_name>"
REST call for a list of alert actions (webhook_sms, email, notable, …)
Use this Splunk search to get details about alert actions:
| rest /services/saved/searches splunk_server=local count=0
| table title, actions
Reflected DDoS Attack
(In reflected attacks, a lot of external benign sources send a lot of packets toward our servers, because our servers' IPs were spoofed earlier in the request packets the attacker sent toward trusted servers, and those trusted servers replied to us instead of to the attacker!)
index=firewall dest="<your company IP range, for example 184.192.0.0/16>" ((transport="udp" AND src_port IN(123,1900,0,53,5353,27015,19,20800,161,389,111,137,27005,520,6881,751,11211,1434,27960,17) AND src_port!=dest_port) OR (transport="tcp" AND src_port=80 AND dest_port!=80))
| bin _time span=5m
| fields src_port, dest, src
| stats count, dc(src) as src_count, dc(dest) as dest_count by src_port, _time
| eval comment="in reflected attacks this ratio is too high"
| eval First_Factor=src_count/dest_count
| eval comment="the count of replies is another important factor"
| eval Final_Factor=First_Factor+count
| search Final_Factor>1200
| eval msg="Reflected DDoS attack has been detected. "."count:".count." from ".src_count." distinct sources with same src_port:".src_port." on ".dest_count." servers"
| fields msg
REST call to get details about alert cron schedules
Use this Splunk search to show alerts' cron_schedule details:
| rest /services/saved/searches splunk_server=local count=0
| search "cron_schedule"="*/*"
| table title, cron_schedule, author
Linux Deletion of SSL Certificate (MITRE: T1485, T1070.004, T1070)
| tstats `summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Filesystem where Filesystem.action=deleted Filesystem.file_path="/etc/ssl/certs/*" Filesystem.file_path IN ("*.pem", "*.crt") by _time span=1h Filesystem.file_name Filesystem.file_path Filesystem.dest Filesystem.process_guid Filesystem.action
| `drop_dm_object_name(Filesystem)`
| rename process_guid as proc_guid
| join proc_guid, _time
    [| tstats `summariesonly` count FROM datamodel=Endpoint.Processes where Processes.parent_process_name!=unknown by _time span=1h Processes.process_id Processes.process_name Processes.process Processes.dest Processes.parent_process_name Processes.parent_process Processes.process_path Processes.process_guid
    | `drop_dm_object_name(Processes)`
    | rename process_guid as proc_guid
    | fields _time dest user parent_process_name parent_process process_name process_path process proc_guid registry_path registry_value_name registry_value_data registry_key_name action]
| table process_name process proc_guid file_name file_path action _time parent_process_name parent_process process_path dest user
Port scan attack (Juniper firewall)
index=* sourcetype="juniper:firewall" src!="192.168.*"
| bin _time span=5m
| stats dc(dest_port) as distinct_port by src, dest, _time
| where distinct_port>1000
DLL Search Order Hijacking (MITRE: T1574.001)
index=* ((((EventCode="4688" OR EventCode="1") AND ((CommandLine="*reg*" CommandLine="*add*" CommandLine="*/d*") OR (CommandLine="*Set-ItemProperty*" CommandLine="*-value*")) AND (CommandLine="*00000000*" OR CommandLine="*0*") AND CommandLine="*SafeDllSearchMode*") OR ((EventCode="4657") ObjectValueName="SafeDllSearchMode" value="0")) OR ((EventCode="13") EventType="SetValue" TargetObject="*SafeDllSearchMode" Details="DWORD (0x00000000)"))
| fields EventCode, EventType, TargetObject, Details, CommandLine, ObjectValueName, value
Find where actual hostnames don’t match the host from the Universal Forwarder
Description: This will provide a list of hosts whose reported host value doesn't match the actual host name, allowing you to find the hosts/IP addresses that need to have the clonefix actions run against them. This can probably be written better to account for host names that include an underscore. Requires access to _internal […]
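Since the original query is truncated above, here is one hedged sketch of the comparison: the `tcpin_connections` metrics in _internal carry both the host the forwarder reports (`hostname`) and the DNS name it connected from (`sourceHost`), which can be compared directly. The field choices are assumptions, not the author's query:

```spl
index=_internal source=*metrics.log* group=tcpin_connections
| stats latest(sourceIp) as sourceIp by hostname, sourceHost
``` compare the reported host against the short form of the connecting host ```
| eval short_source=lower(mvindex(split(sourceHost, "."), 0))
| where lower(hostname)!=short_source
| table hostname, sourceHost, sourceIp
```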
First-time connections between servers (Cisco FTD)
Description: This query helps you see all new connections between servers. Still a work in progress; it can be extended further. “White-listing” happens through the lookup files. Query:
index=nfw "Allow"
| rex "SrcIP.*\b(?<SrcIP>\d+\.\d+\.\d+\.\d+).*DstIP.*\b(?<DstIP>\d+\.\d+\.\d+\.\d+)"
| stats count min(_time) AS earliest max(_time) AS maxtime BY SrcIP, DstIP
| where earliest>relative_time(now(), "-1d@d") AND count<=1
| search DstIP=10.0.0.0/8 AND NOT [| inputlookup networkdestip.csv | fields DstIP]
| search SrcIP=10.0.0.0/8 AND NOT [| inputlookup networksrcip.csv | fields SrcIP]
| fields SrcIP, DstIP
Show all successful splunk configuration changes by user
index=_audit action=edit* info=granted operation!=list host= object=*
| transaction action user operation host maxspan=30s
| stats values(action) as action values(object) as modified_object by _time, operation, user, host
| rename user as modified_by
| table _time action modified_object modified_by
Netflow Activity dashboard showing MBs in to dest_ip
Description: Dashboard that helps me understand activity in my home lab by looking at netflow data from my OPNsense firewall. This dashboard starts with a simple timechart that gives me a trend of average mb_in across all of my devices. I have OPNsense configured to send NetFlow v9 data to a Splunk Independent Stream Forwarder, which […]
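The opening trend panel can be sketched roughly as follows; the index name and the `stream:netflow` sourcetype with its `bytes_in` field are assumptions based on a typical Splunk Stream setup, not the author's exact search:

```spl
index=netflow sourcetype="stream:netflow"
``` convert bytes to megabytes, then trend average inbound MB per destination ```
| eval mb_in=bytes_in/1024/1024
| timechart span=5m avg(mb_in) as avg_mb_in by dest_ip
```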
Truncated Data Issues
Displays sourcetypes being truncated on ingest; then, on selection, shows the related _internal message and an event that caused it to trigger.
<form>
  <label>Data Issues</label>
  <description>Truncation, Date Parsing and Timestamp issues</description>
  <fieldset submitButton="false">
    <input type="time" token="field1">
      <label></label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Choose a problematic sourcetype</title>
      <table>
        <search>
          <query>index=_internal sourcetype=splunkd component=LineBreakingProcessor | extract | rex "because\slimit\sof\s(?<limit>\S+).*>=\s(?<actual>\S+)" | stats count avg(actual) max(actual) dc(data_source) dc(data_host) BY data_sourcetype, limit | eval avg(actual)=round('avg(actual)') | sort - count</query>
          <earliest>$field1.earliest$</earliest>
          <latest>$field1.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">100</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">cell</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
        <drilldown>
          <set token="form.data_sourcetype">$row.data_sourcetype$</set>
          <set token="form.limit">$row.limit$</set>
        </drilldown>
      </table>
    </panel>
  </row>
  <row>
    <panel depends="$form.data_sourcetype$">
      <title>Event in _internal</title>
      <table>
        <search>
          <query>index=_internal sourcetype=splunkd component=LineBreakingProcessor data_sourcetype="$form.data_sourcetype$" | extract | rex "because\slimit\sof\s(?<limit>\S+).*>=\s(?<actual>\S+)" | fields _raw _time data_sourcetype limit</query>
          <earliest>$field1.earliest$</earliest>
          <latest>$field1.latest$</latest>
        </search>
        <option name="count">10</option>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
    <panel depends="$form.data_sourcetype$">
      <title>Event that reaches the limit</title>
      <event>
        <search>
          <query>index=* OR index=_* sourcetype=$form.data_sourcetype$ | eval length=len(_raw) | search length=$form.limit$</query>
          <earliest>$field1.earliest$</earliest>
          <latest>$field1.latest$</latest>
        </search>
        <option name="list.drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </event>
    </panel>
  </row>
</form>
NIX Login Dashboard with Success, Failed and Sudo activity
Description: Built this dashboard to display login activity for my *nix host devices. At the top you have a box called “Filter” that allows you to insert search parameters in the base search (ex: user=thall). Each panel has its own “TimeRangePicker” and a “Multiselect input” which allows you to decide what fields to add to […]
List the size of lookup files with an SPL search.
| rest splunk_server=local /services/data/lookup-table-files/
| rename eai:acl.app as app
| table app title
| search NOT title IN (*.kmz)
| map maxsearches=990 search="| inputlookup $title$ | eval size=0 | foreach * [ eval size=size+coalesce(len('<<FIELD>>'),0), app=\"$app$\", title=\"$title$\" | fields app title size ]"
| stats sum(size) by app title
| sort - sum(size)
Detect Credit Card Numbers using Luhn Algorithm
Description: Detect whether any log file in Splunk contains credit card numbers.
index=* ((source IN("*.log","*.bak","*.txt","*.csv","/tmp*","/temp*","c:\tmp*")) OR (tag=web dest_content=*))
| eval comment="Match against the simple CC regex to narrow down the events in the lookup"
| rex max_match=1 "[\"\s\'\,]{0,1}(?<CCMatch>[\d.\-\s]{11,24})[\"\s\'\,]{0,1}"
| where isnotnull(CCMatch)
| eval comment="Apply the LUHN algorithm to see if the CC number extracted is valid"
| eval cc=tonumber(replace(CCMatch,"[ -\.]",""))
| eval comment="Lower min to 11 to find additional CCs which may pick up POSIX timestamps as well."
| where len(cc)>=14 AND len(cc)<=16
| eval cc=printf("%024d", cc)
| eval ccd=split(cc,"")
| foreach 23 22 21 20 19 18 17 16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0 [ | eval ccd_reverse=mvappend(ccd_reverse,mvindex(ccd,<<FIELD>>)) ]
| rename ccd_reverse AS ccd
| eval cce=mvappend(mvindex(ccd,0),mvindex(ccd,2),mvindex(ccd,4),mvindex(ccd,6),mvindex(ccd,8),mvindex(ccd,10),mvindex(ccd,12),mvindex(ccd,14),mvindex(ccd,16),mvindex(ccd,18),mvindex(ccd,20),mvindex(ccd,22),mvindex(ccd,24))
| eval cco=mvappend(mvindex(ccd,1),mvindex(ccd,3),mvindex(ccd,5),mvindex(ccd,7),mvindex(ccd,9),mvindex(ccd,11),mvindex(ccd,13),mvindex(ccd,15),mvindex(ccd,17),mvindex(ccd,19),mvindex(ccd,21),mvindex(ccd,23))
| eval cco2=mvmap(cco,cco*2)
| eval cco2HT10=mvfilter(cco2>9)
| eval cco2LT10=mvfilter(cco2<=9)
| eval cco2LH10dt=mvmap(cco2HT10,cco2HT10-9)
| fillnull value=0 cco2LT10 cco2LH10dt
| eventstats sum(cce) as t1 sum(cco2LT10) as t2 sum(cco2LH10dt) as t3 BY cc
| eval totalChecker=t1+t2+t3
| eval CCIsValid=if((totalChecker%10)=0,"true","false")
| fields - cc ccd cce cco cco2 cco2HT10 cco2LT10 cco2LH10dt t1 t2 t3 totalChecker raw time
| where CCIsValid="true"
| eval comment="Find the field where we found the CC number"
| foreach _raw * [ | eval CCStringField=if("<<FIELD>>"!="CCMatch" AND like('<<FIELD>>',"%".CCMatch."%"),"<<FIELD>>",CCStringField) ]
| table _time CCMatch CCStringField source sourcetype host src dest http_user_agent
Indexes size and EPS
Description: SPL request to display, by index: index name, index size, event EPS statistics (sum, min, avg, max, perc95), and the same statistics restricted to work hours (8am-6pm). Requires: Splunk license. Query:
index=_internal source=*license_usage.log idx=z*
| fields b idx _time
| eval GB=b/1024/1024/1024, index=idx
| stats sum(GB) as "Volume GB" by index
| append extendtimerange=t
    [| tstats count where index=z* by _time index span=1s
    | stats min(count) AS "min EPS", avg(count) AS "avg EPS", max(count) AS "max EPS", sum(count) AS "sum evts", perc95(count) AS "perc95 EPS" by index]
| append extendtimerange=t
    [| tstats count where index=z* by _time index span=1s
    | eval date_hour=strftime(_time, "%H")
    | search date_hour>7 AND date_hour<19
    | stats min(count) AS "min EPS WH", avg(count) AS "avg EPS WH", max(count) AS "max EPS WH", perc95(count) AS "perc95 EPS WH" by index]
| stats first(*) as * by index
| eval "avg EPS"=round('avg EPS', 2), "perc95 EPS"=round('perc95 EPS', 2), "Volume GB"=round('Volume GB', 2), "avg EPS WH"=round('avg EPS WH', 2), "perc95 EPS WH"=round('perc95 EPS WH', 2), "sizeGB by evt"=('Volume GB'/'sum evts'), "sizeB by evt"=(('Volume GB'/'sum evts')*1024*1024*1024)
| table index, "Volume GB", "sum evts", "sizeGB by evt", "sizeB by evt", "min EPS", "min EPS WH", "avg EPS", "avg EPS WH", "perc95 EPS", "perc95 EPS WH", "max EPS", "max EPS WH"
Software inventory
I’ve been looking for something like this for a while, and decided to make it myself. This relies on the tinv_software_inventory add-on found on Splunkbase, but you can do without it if you feel like it.
<form>
  <label>Software Inventory</label>
  <fieldset submitButton="false" autoRun="false">
    <input type="dropdown" token="software_picker" searchWhenChanged="true">
      <label>Software</label>
      <choice value="&quot;falcon-sensor&quot; &quot;Crowdstrike Windows Sensor&quot;">Crowdstrike</choice>
      <choice value="&quot;*qualys*&quot;">Qualys</choice>
      <choice value="&quot;*SecureConnector*&quot;">Forescout</choice>
      <prefix>tinv_software_name IN (</prefix>
      <suffix>)</suffix>
      <default>"falcon-sensor" "Crowdstrike Windows Sensor"</default>
    </input>
    <input type="dropdown" token="environment_picker" searchWhenChanged="true">
      <label>Environment</label>
      <choice value="On-Prem">On-Prem</choice>
      <choice value="AWS">AWS</choice>
      <choice value="env2">env2</choice>
      <choice value="env3">env3</choice>
      <choice value="env4">env4</choice>
      <prefix>Environment IN (</prefix>
      <suffix>)</suffix>
      <default>On-Prem</default>
    </input>
    <input type="dropdown" token="os_picker" searchWhenChanged="true">
      <label>Operating System</label>
      <choice value="windows">Windows</choice>
      <choice value="unix">Linux</choice>
      <default>windows</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>| tstats count where index IN ($os_picker$) host!=*.txt by host | eval host=lower(host) | eval Environment=case(host LIKE "%desktop%" OR host LIKE "%z1-%" OR host LIKE "ec2%" OR host LIKE "%z2-%" OR host LIKE "%z-%" OR host LIKE "%z3-%" OR host LIKE "i-%", "AWS", host LIKE "cc%", "Communicorp", host LIKE "%win%" OR host LIKE "%awn%", "Argus", host LIKE "%empoweredbenefits.com", "Empowered Benefits", 1=1, "On-Prem") | search $environment_picker$ | join host type=outer [| search index=$os_picker$ tag=software tag=inventory $software_picker$ | eval host=lower(host) | fields host tinv_software_name tinv_software_version ] | fillnull value="-" tinv_software_name | rename tinv_software_name AS "Software Name" tinv_software_version AS "Version" | fields host "Software Name" "Version" Environment | sort -tinv_software_name</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">50</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">true</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</form>
Hope this helps. Let me know if you have any suggestions.