| tstats `summariesonly` count min(_time) as firstTime max(_time) as lastTime FROM datamodel=Endpoint.Filesystem where Filesystem.action=deleted Filesystem.file_path="/etc/ssl/certs/*" Filesystem.file_path IN ("*.pem", "*.crt") by _time span=1h Filesystem.file_name Filesystem.file_path Filesystem.dest Filesystem.process_guid Filesystem.action
| `drop_dm_object_name(Filesystem)`
| rename process_guid as proc_guid
| join proc_guid, _time
    [| tstats `summariesonly` count FROM datamodel=Endpoint.Processes where Processes.parent_process_name != unknown by _time span=1h Processes.process_id Processes.process_name Processes.process Processes.dest Processes.parent_process_name Processes.parent_process Processes.process_path Processes.process_guid
    | `drop_dm_object_name(Processes)`
    | rename process_guid as proc_guid
    | fields _time dest user parent_process_name parent_process process_name process_path process proc_guid registry_path registry_value_name registry_value_data registry_key_name action]
| table process_name process proc_guid file_name file_path action _time parent_process_name parent_process process_path dest user
Port scan attack (Juniper firewall)
index=* sourcetype="juniper:firewall" src!="192.168.*" | bin _time span=5m | stats dc(dest_port) as distinct_port by src,dest,_time |where distinct_port >1000 |
DLL Search Order Hijacking (MITRE: T1574.001)
index=* ((((EventCode="4688" OR EventCode="1") AND ((CommandLine="*reg*" CommandLine="*add*" CommandLine="*/d*") OR (CommandLine="*Set-ItemProperty*" CommandLine="*-value*")) AND (CommandLine="*00000000*" OR CommandLine="*0*") AND CommandLine="*SafeDllSearchMode*") OR ((EventCode="4657") ObjectValueName="SafeDllSearchMode" value="0")) OR ((EventCode="13") EventType="SetValue" TargetObject="*SafeDllSearchMode" Details="DWORD (0x00000000)"))
| fields EventCode, EventType, TargetObject, Details, CommandLine, ObjectValueName, value
Find where actual hostnames don’t match the host from the Universal Forwarder
Description: This will provide a list of hosts that don’t match the actual host names, so you can find the hosts/IP addresses that need to have the clonefix actions run against them. This can probably be written better to account for host names that include an underscore. Requires access to _internal […]
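The query itself is cut off above. As a rough sketch of the idea (not the original search), the forwarder’s reported hostname can be compared against the connecting host recorded in the tcpin_connections metrics — the hostname, sourceHost, and sourceIp fields below come from splunkd’s metrics.log and are worth verifying in your environment:

index=_internal sourcetype=splunkd group=tcpin_connections
| rex field=sourceHost "^(?<short_host>[^\.]+)" ```compare short names only```
| where lower(hostname) != lower(short_host)
| stats latest(sourceIp) as sourceIp by hostname, sourceHost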
First-time connection between servers (Cisco FTD)
Description: This query helps you to see all new connections between servers. Still a work in progress; it can be extended further. “White-listing” happens through the lookup files. Query:
index=nfw "Allow" | rex (?:SrcIP.*\b(?<SrcIP>\d+\.\d+\.\d+\.\d+).*DstIP.*\b(?<DstIP>\d+\.\d+\.\d+\.\d+)) | stats count min(_time) AS earliest max(_time) AS maxtime BY SrcIP, DstIP | where earliest>relative_time(now(), "-1d@d") AND count<=1 | search DstIP=10.0.0.0/8 AND NOT [| inputlookup networkdestip.csv | fields DstIP] | search SrcIP=10.0.0.0/8 AND NOT [| inputlookup networksrcip.csv | fields SrcIP] | fields SrcIP, DstIP |
Show all successful Splunk configuration changes by user
index=_audit action=edit* info=granted operation!=list host=* object=*
| transaction action user operation host maxspan=30s
| stats values(action) as action values(object) as modified_object by _time, operation, user, host
| rename user as modified_by
| table _time action modified_object modified_by
Netflow activity dashboard showing MBs in to dest_ip
Description: Dashboard that helps me understand activity in my home lab, looking at netflow data from my OPNsense firewall. This dashboard starts with a simple timechart that gives me a trend of average mb_in across all of my devices. I have OPNsense configured to send netflow v9 data to a Splunk independent Stream forwarder, which […]
Truncated Data Issues
Displays sourcetypes being truncated on ingest; on selection, shows the related _internal message and an event that caused it to trigger.
<form>
  <label>Data Issues</label>
  <description>Truncation, Date Parsing and Timestamp issues</description>
  <fieldset submitButton="false">
    <input type="time" token="field1">
      <label></label>
      <default>
        <earliest>-24h@h</earliest>
        <latest>now</latest>
      </default>
    </input>
  </fieldset>
  <row>
    <panel>
      <title>Choose a problematic sourcetype</title>
      <table>
        <search>
          <query>index=_internal sourcetype=splunkd component=LineBreakingProcessor | extract | rex "because\slimit\sof\s(?<limit>\S+).*>=\s(?<actual>\S+)" | stats count avg(actual) max(actual) dc(data_source) dc(data_host) BY data_sourcetype, limit | eval avg(actual)=round('avg(actual)') | sort - count</query>
          <earliest>$field1.earliest$</earliest>
          <latest>$field1.latest$</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">100</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">cell</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">false</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
        <drilldown>
          <set token="form.data_sourcetype">$row.data_sourcetype$</set>
          <set token="form.limit">$row.limit$</set>
        </drilldown>
      </table>
    </panel>
  </row>
  <row>
    <panel depends="$form.data_sourcetype$">
      <title>Event in _internal</title>
      <table>
        <search>
          <query>index=_internal sourcetype=splunkd component=LineBreakingProcessor data_sourcetype="$form.data_sourcetype$" | extract | rex "because\slimit\sof\s(?<limit>\S+).*>=\s(?<actual>\S+)" | fields _raw _time data_sourcetype limit</query>
          <earliest>$field1.earliest$</earliest>
          <latest>$field1.latest$</latest>
        </search>
        <option name="count">10</option>
        <option name="drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </table>
    </panel>
    <panel depends="$form.data_sourcetype$">
      <title>Event that reaches the limit</title>
      <event>
        <search>
          <query>index=* OR index=_* sourcetype=$form.data_sourcetype$ | eval length=len(_raw) | search length=$form.limit$</query>
          <earliest>$field1.earliest$</earliest>
          <latest>$field1.latest$</latest>
        </search>
        <option name="list.drilldown">none</option>
        <option name="refresh.display">progressbar</option>
      </event>
    </panel>
  </row>
</form>
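Once a sourcetype shows up here, the usual fix is to raise its TRUNCATE limit in props.conf on the parsing tier — a sketch, with the sourcetype name and limit as placeholders:

# props.conf on the indexer or heavy forwarder (sketch)
[your_sourcetype]
# Default TRUNCATE is 10000 bytes; 0 disables truncation entirely (use with care)
TRUNCATE = 20000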
NIX Login Dashboard with Success, Failed and Sudo activity
Description: Built this dashboard to display login activity for my *nix host devices. At the top you have a box called “Filter” that allows you to insert search parameters into the base search (ex: user=thall). Each panel has its own “TimeRangePicker” and a “Multiselect input”, which allows you to decide what fields to add to […]
List the size of lookup files with an SPL search.
| rest splunk_server=local /services/data/lookup-table-files/
| rename eai:acl.app as app
| table app title
| search NOT title IN (*.kmz)
| map maxsearches=990 search="| inputlookup $title$ | eval size=0 | foreach * [ eval size=size+coalesce(len('<<FIELD>>'),0), app=\"$app$\", title=$title$ | fields app title size]"
| stats sum(size) by app title
| sort - sum(size)
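Note that map runs one search per lookup file, so this can take a while on a busy search head. The size is a character count (roughly bytes); to present it in MB, one more eval can be appended — a small addition, not in the original:

| eval "size (MB)"=round('sum(size)'/1024/1024, 2)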
Detect Credit Card Numbers using Luhn Algorithm
Description: Detect if any log file in Splunk contains credit card numbers.
index=* ((source IN("*.log","*.bak","*.txt","*.csv","/tmp*","/temp*","c:\tmp*")) OR (tag=web dest_content=*))
| eval comment="Match against the simple CC regex to narrow down the events in the lookup"
| rex max_match=1 "[\"\s\'\,]{0,1}(?<CCMatch>[\d.\-\s]{11,24})[\"\s\'\,]{0,1}"
| where isnotnull(CCMatch)
| eval comment="Apply the LUHN algorithm to see if the CC number extracted is valid"
| eval cc=tonumber(replace(CCMatch,"[ -\.]",""))
| eval comment="Lower min to 11 to find additional CCs which may pick up POSIX timestamps as well."
| where len(cc)>=14 AND len(cc)<=16
| eval cc=printf("%024d", cc)
| eval ccd=split(cc,"")
| foreach 23 22 21 20 19 18 17 16 15 14 13 12 11 10 9 8 7 6 5 4 3 2 1 0
    [| eval ccd_reverse=mvappend(ccd_reverse,mvindex(ccd,<<FIELD>>)) ]
| rename ccd_reverse AS ccd
| eval cce=mvappend(mvindex(ccd,0),mvindex(ccd,2),mvindex(ccd,4),mvindex(ccd,6),mvindex(ccd,8),mvindex(ccd,10),mvindex(ccd,12),mvindex(ccd,14),mvindex(ccd,16),mvindex(ccd,18),mvindex(ccd,20),mvindex(ccd,22),mvindex(ccd,24))
| eval cco=mvappend(mvindex(ccd,1),mvindex(ccd,3),mvindex(ccd,5),mvindex(ccd,7),mvindex(ccd,9),mvindex(ccd,11),mvindex(ccd,13),mvindex(ccd,15),mvindex(ccd,17),mvindex(ccd,19),mvindex(ccd,21),mvindex(ccd,23))
| eval cco2=mvmap(cco,cco*2)
| eval cco2HT10=mvfilter(cco2>9)
| eval cco2LT10=mvfilter(cco2<=9)
| eval cco2LH10dt=mvmap(cco2HT10,cco2HT10-9)
| fillnull value=0 cco2LT10 cco2LH10dt
| eventstats sum(cce) as t1 sum(cco2LT10) as t2 sum(cco2LH10dt) as t3 BY cc
| eval totalChecker=t1+t2+t3
| eval CCIsValid=if((totalChecker%10)=0,"true","false")
| fields - cc ccd cce cco cco2 cco2HT10 cco2LT10 cco2LH10dt t1 t2 t3 totalChecker raw time
| where CCIsValid="true"
| eval comment="Find the field where we found the CC number"
| foreach _raw * [| eval CCStringField=if("<<FIELD>>"!="CCMatch" AND like('<<FIELD>>',"%".CCMatch."%"),"<<FIELD>>",CCStringField) ]
| table _time CCMatch CCStringField source sourcetype host src dest http_user_agent
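To sanity-check the pipeline, you can feed it a known Luhn-valid number such as 4111 1111 1111 1111 (the standard Visa test card) via makeresults — a sketch; everything from the rex stage above is reused unchanged:

| makeresults
| eval _raw="payment approved for card 4111 1111 1111 1111"
``` append the pipeline above from "| rex max_match=1 ..." onward; CCIsValid should come out "true" ```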
Indexes size and EPS
Description: SPL request to display, by index:
- Index name
- Index size
- Events sum, min, avg, max, perc95
- Events sum, min, avg, max, perc95 during work hours (8am-6pm)
Required: Splunk license. Query:
index=_internal source=*license_usage.log idx=z*
| fields b idx _time
| eval GB=b/1024/1024/1024, index=idx
| stats sum(GB) as "Volume GB" by index
| append extendtimerange=t
    [| tstats count where index=z* by _time index span=1s
    | stats min(count) AS "min EPS", avg(count) AS "avg EPS", max(count) AS "max EPS", sum(count) AS "sum evts", perc95(count) AS "perc95 EPS" by index]
| append extendtimerange=t
    [| tstats count where index=z* by _time index span=1s
    | eval date_hour=strftime(_time, "%H")
    | search date_hour>7 AND date_hour<19
    | stats min(count) AS "min EPS WH", avg(count) AS "avg EPS WH", max(count) AS "max EPS WH", perc95(count) AS "perc95 EPS WH" by index]
| stats first(*) as * by index
| eval "avg EPS" = round('avg EPS', 2), "perc95 EPS" = round('perc95 EPS', 2), "Volume GB" = round('Volume GB', 2), "avg EPS WH" = round('avg EPS WH', 2), "perc95 EPS WH" = round('perc95 EPS WH', 2), "sizeGB by evt"=('Volume GB'/'sum evts'), "sizeB by evt"=(('Volume GB'/'sum evts')*1024*1024*1024)
| table index, "Volume GB", "sum evts", "sizeGB by evt", "sizeB by evt", "min EPS", "min EPS WH", "avg EPS", "avg EPS WH", "perc95 EPS", "perc95 EPS WH", "max EPS", "max EPS WH"
Software inventory
I’ve been looking a while for something like this, and decided to make it myself. This relies on the tinv_software_inventory add-on found on Splunkbase, but you can do without it if you feel like it.
<form>
  <label>Software Inventory</label>
  <fieldset submitButton="false" autoRun="false">
    <input type="dropdown" token="software_picker" searchWhenChanged="true">
      <label>Software</label>
      <choice value="&quot;falcon-sensor&quot; &quot;Crowdstrike Windows Sensor&quot;">Crowdstrike</choice>
      <choice value="&quot;*qualys*&quot;">Qualys</choice>
      <choice value="&quot;*SecureConnector*&quot;">Forescout</choice>
      <prefix>tinv_software_name IN (</prefix>
      <suffix>)</suffix>
      <default>"falcon-sensor" "Crowdstrike Windows Sensor"</default>
    </input>
    <input type="dropdown" token="environment_picker" searchWhenChanged="true">
      <label>Environment</label>
      <choice value="On-Prem">On-Prem</choice>
      <choice value="AWS">AWS</choice>
      <choice value="env2">env2</choice>
      <choice value="env3">env3</choice>
      <choice value="env4">env4</choice>
      <prefix>Environment IN (</prefix>
      <suffix>)</suffix>
      <default>On-Prem</default>
    </input>
    <input type="dropdown" token="os_picker" searchWhenChanged="true">
      <label>Operating System</label>
      <choice value="windows">Windows</choice>
      <choice value="unix">Linux</choice>
      <default>windows</default>
    </input>
  </fieldset>
  <row>
    <panel>
      <table>
        <search>
          <query>| tstats count where index IN ($os_picker$) host!=*.txt by host | eval host=lower(host) | eval Environment=case(host LIKE "%desktop%" OR host LIKE "%z1-%" OR host LIKE "ec2%" OR host LIKE "%z2-%" OR host LIKE "%z-%" OR host LIKE "%z3-%" OR host LIKE "i-%", "AWS", host LIKE "cc%", "Communicorp", host LIKE "%win%" OR host LIKE "%awn%", "Argus", host LIKE "%empoweredbenefits.com", "Empowered Benefits", 1=1, "On-Prem") | search $environment_picker$ | join host type=outer [| search index=$os_picker$ tag=software tag=inventory $software_picker$ | eval host=lower(host) | fields host tinv_software_name tinv_software_version ] | fillnull value="-" tinv_software_name | rename tinv_software_name AS "Software Name" tinv_software_version AS "Version" | fields host "Software Name" "Version" Environment | sort - "Software Name"</query>
          <earliest>-24h@h</earliest>
          <latest>now</latest>
          <sampleRatio>1</sampleRatio>
        </search>
        <option name="count">50</option>
        <option name="dataOverlayMode">none</option>
        <option name="drilldown">none</option>
        <option name="percentagesRow">false</option>
        <option name="refresh.display">progressbar</option>
        <option name="rowNumbers">true</option>
        <option name="totalsRow">false</option>
        <option name="wrap">true</option>
      </table>
    </panel>
  </row>
</form>
Hope this helps. Let me know if you have any suggestions.
DNS search for encoded data
Description: Use this Splunk search to find Base64-encoded content in DNS queries. The goal is to examine the DNS query field of the DNS events to find subdomain streams that contain only Base64-valid characters. Utilizing DNS queries with encoded information is a known method to exfiltrate data. But you do not know if […]
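The query is truncated above; a minimal sketch of the idea, with the index name, query field, and 20-character threshold all as assumptions to adapt:

index=dns query=*
| rex field=query "^(?<subdomain>[^.]+)\.(?<parent_domain>.+)$"
| where len(subdomain)>=20 AND match(subdomain, "^[A-Za-z0-9+/=]+$") ```only Base64-valid characters```
| stats count dc(subdomain) as distinct_subdomains by src, parent_domain
| sort - distinct_subdomains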
Show cron frequency and scheduling of all scheduled searches
This search shows you all scheduled searches and their respective cron frequency and cron schedule. It also helps find frequently running saved searches.
| rest splunk_server=local "/servicesNS/-/-/saved/searches/" search="is_scheduled=1" search="disabled=0" | fields title, cron_schedule, eai:acl.app | rename title as savedsearch_name | eval pieces=split(cron_schedule, " ") | eval c_min=mvindex(pieces, 0), c_h=mvindex(pieces, 1), c_d=mvindex(pieces, 2), c_mday=mvindex(pieces, 3), c_wday=mvindex(pieces, 4) | eval c_min_div=if(match(c_min, "/"), replace(c_min, "^.*/(\d+)$", "\1"), null()) | eval c_mins=if(match(c_min, ","), split(c_min, ","), null()) | eval c_min_div=if(isnotnull(c_mins), abs(tonumber(mvindex(c_mins, 1)) - tonumber(mvindex(c_mins, 0))), c_min_div) | eval c_hs=if(match(c_h, ","), split(c_h, ","), null()) | eval c_h_div=case(match(c_h, "/"), replace(c_h, "^.*/(\d+)$", "\1"), isnotnull(c_hs), abs(tonumber(mvindex(c_hs, 1)) - tonumber(mvindex(c_hs, 0))), 1=1, null()) | eval c_wdays=if(match(c_wday, ","), split(c_wday, ","), null()) | eval c_wday_div=case(match(c_wday, "/"), replace(c_wday, "^.*/(\d+)$", "\1"), isnotnull(c_wdays), abs(tonumber(mvindex(c_wdays, 1)) - tonumber(mvindex(c_wdays, 0))), 1=1, null()) | eval i_m=case(c_d < 29, 86400 * 28, c_d = 31, 86400 * 31, 1=1, null()) | eval i_h=case(isnotnull(c_h_div), c_h_div * 3600, c_h = "*", null(), match(c_h, "^\d+$"), 86400) | eval i_min=case(isnotnull(c_min_div), c_min_div * 60, c_min = "*", 60, match(c_min, "^\d+$"), 3600) | eval i_wk=case(isnotnull(c_wday_div), c_wday_div * 86400, c_wday = "*", null(), match(c_wday, "^\d+$"), 604800) | eval cron_minimum_freq=case(isnotnull(i_m), i_m, isnotnull(i_wk) AND isnotnull(c_min_div), i_min, isnotnull(i_wk) AND isnull(c_min_div), i_wk, isnotnull(i_h), i_h, 1=1, min(i_min)) | fields - c_d c_h c_hs c_h_div c_mday c_min c_min_div c_mins c_wday c_wdays c_wday_div pieces i_m i_min i_h i_wk | fields savedsearch_name cron_minimum_freq cron_schedule eai:acl.app |
Data model Acceleration Details
This Splunk search shows you a lot of good information about your data model acceleration and performance.
| rest /services/admin/summarization by_tstats=t splunk_server=local count=0
| eval key=replace(title,(("tstats:DM_" . 'eai:acl.app') . "_"),""), datamodel=replace('summary.id',(("DM_" . 'eai:acl.app') . "_"),"")
| join type=left key
    [| rest /services/data/models splunk_server=local count=0
    | table title, "acceleration.cron_schedule", "eai:digest"
    | rename title as key
    | rename "acceleration.cron_schedule" as cron]
| table datamodel, "eai:acl.app", "summary.access_time", "summary.is_inprogress", "summary.size", "summary.latest_time", "summary.complete", "summary.buckets_size", "summary.buckets", cron, "summary.last_error", "summary.time_range", "summary.id", "summary.mod_time", "eai:digest", "summary.earliest_time", "summary.last_sid", "summary.access_count"
| rename "summary.id" as summary_id, "summary.time_range" as retention, "summary.earliest_time" as earliest, "summary.latest_time" as latest, "eai:digest" as digest
| rename "summary.*" as "*", "eai:acl.*" as "*"
| sort datamodel
| rename access_count as "Datamodel_Acceleration.access_count", access_time as "Datamodel_Acceleration.access_time", app as "Datamodel_Acceleration.app", buckets as "Datamodel_Acceleration.buckets", buckets_size as "Datamodel_Acceleration.buckets_size", cron as "Datamodel_Acceleration.cron", complete as "Datamodel_Acceleration.complete", datamodel as "Datamodel_Acceleration.datamodel", digest as "Datamodel_Acceleration.digest", earliest as "Datamodel_Acceleration.earliest", is_inprogress as "Datamodel_Acceleration.is_inprogress", last_error as "Datamodel_Acceleration.last_error", last_sid as "Datamodel_Acceleration.last_sid", latest as "Datamodel_Acceleration.latest", mod_time as "Datamodel_Acceleration.mod_time", retention as "Datamodel_Acceleration.retention", size as "Datamodel_Acceleration.size", summary_id as "Datamodel_Acceleration.summary_id"
| fields + "Datamodel_Acceleration.access_count", "Datamodel_Acceleration.access_time", "Datamodel_Acceleration.app", "Datamodel_Acceleration.buckets", "Datamodel_Acceleration.buckets_size", "Datamodel_Acceleration.cron", "Datamodel_Acceleration.complete", "Datamodel_Acceleration.datamodel", "Datamodel_Acceleration.digest", "Datamodel_Acceleration.earliest", "Datamodel_Acceleration.is_inprogress", "Datamodel_Acceleration.last_error", "Datamodel_Acceleration.last_sid", "Datamodel_Acceleration.latest", "Datamodel_Acceleration.mod_time", "Datamodel_Acceleration.retention", "Datamodel_Acceleration.size", "Datamodel_Acceleration.summary_id"
| rename "Datamodel_Acceleration.*" as "*"
| join type=outer last_sid
    [| rest splunk_server=local count=0 /services/search/jobs reportSearch=summarize*
    | rename sid as last_sid
    | fields + last_sid, runDuration]
| eval "size(MB)"=round((size / 1048576),1), "retention(days)"=if((retention == 0),"unlimited",round((retention / 86400),1)), "complete(%)"=round((complete * 100),1), "runDuration(s)"=round(runDuration,1)
| sort 100 + datamodel
| table datamodel, app, cron, "retention(days)", earliest, latest, is_inprogress, "complete(%)", "size(MB)", "runDuration(s)", last_error
Splunk CIM Assist
Got tired of having to go through each data source to determine what indexes should go into the Splunk_SA_CIM search macros; this does the leg work.
index=*
| fields index, tag, user, action, object_category
| eval datamodel = if(tag="alert", index."."."alert", datamodel)
| eval datamodel = if(tag="listening" AND tag="port", index."."."application_state_deprecated"."."."endpoint", datamodel)
| eval datamodel = if(tag="process" AND tag="report", index."."."application_state_deprecated"."."."endpoint", datamodel)
| eval datamodel = if(tag="service" AND tag="report", index."."."application_state_deprecated"."."."endpoint", datamodel)
| eval datamodel = if(tag="authentication" AND action!="success" AND user!="*$", index."."."authentication", datamodel)
| eval datamodel = if(tag="certificate", index."."."certificates", datamodel)
| eval datamodel = if(tag="change" AND NOT (object_category=file OR object_category=directory OR object_category=registry), index."."."change"."."."change_analysis_deprecated", datamodel)
| eval datamodel = if(tag="dlp" AND tag="incident", index."."."data_loss_prevention", datamodel)
| eval datamodel = if(tag="database", index."."."database", datamodel)
| eval datamodel = if(tag="email", index."."."email", datamodel)
| eval datamodel = if(tag="endpoint" AND tag="filesystem", index."."."endpoint", datamodel)
| eval datamodel = if(tag="endpoint" AND tag="registry", index."."."endpoint", datamodel)
| eval datamodel = if(tag="track_event_signatures" AND (signature="*" OR signature_id="*"), index."."."event_signatures", datamodel)
| eval datamodel = if(tag="messaging", index."."."interprocess_messaging", datamodel)
| eval datamodel = if(tag="ids" AND tag="attack", index."."."intrusion_detection", datamodel)
| eval datamodel = if(tag="inventory" AND (tag="cpu" OR tag="memory" OR tag="network" OR tag="storage" OR (tag="system" AND tag="version") OR tag="user" OR tag="virtual"), index."."."inventory", datamodel)
| eval datamodel = if(tag="jvm", index."."."jvm", datamodel)
| eval datamodel = if(tag="malware" AND tag="attack", index."."."malware", datamodel)
| eval datamodel = if(tag="network" AND tag="resolution" AND tag="dns", index."."."network_resolution_dns", datamodel)
| eval datamodel = if(tag="network" AND tag="session", index."."."network_sessions", datamodel)
| eval datamodel = if(tag="network" AND tag="communicate", index."."."network_traffic", datamodel)
| eval datamodel = if(tag="performance" AND (tag="cpu" OR tag="facilities" OR tag="memory" OR tag="storage" OR tag="network" OR (tag="os" AND ((tag="time" AND tag="synchronize") OR tag="uptime"))), index."."."performance", datamodel)
| eval datamodel = if(tag="ticketing", index."."."ticket_management", datamodel)
| eval datamodel = if(tag="update" AND tag="status", index."."."updates", datamodel)
| eval datamodel = if(tag="vulnerability" AND tag="report", index."."."vulnerabilities", datamodel)
| eval datamodel = if(tag="web", index."."."web", datamodel)
| rex field=datamodel "(?<index>[^\\.]+)\.(?<datamodel>.*)"
| makemv delim="." datamodel
| stats values(index) as index by datamodel
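The output (one row per datamodel with its index list) maps directly onto the Splunk_SA_CIM index-constraint macros. A sketch of what the resulting definition could look like in Splunk_SA_CIM/local/macros.conf — the index names here are placeholders:

# Splunk_SA_CIM/local/macros.conf (sketch)
[cim_Authentication_indexes]
# Paste the values(index) list from the authentication row here
definition = (index=wineventlog OR index=linux_secure)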
Search for disabled AD accounts that have been re-enabled
This is a search you can use as an alert (or whatever you desire) to look for AD accounts that were disabled in the past 90 days and then re-enabled recently; the query below checks the past hour for the re-enabling. You can tweak as needed.
index=YOURINDEX EventCode IN (4725,4722) earliest=-90d
| eval account=mvindex(Account_Name,1) ```separate out the account from the logs and create a field for it```
| stats values(_time) as times, earliest(EventCode) as firstEvent, latest(EventCode) as lastEvent, latest(Account_Name) as lastAccounts, earliest(Account_Name) as firstAccounts by account ```get the stats values of these fields and rename them for further manipulation```
| eval last_action_user=mvindex(lastAccounts,0), first_action_user=mvindex(firstAccounts, 0) ```separate out the accounts that did the disabling & re-enabling and create fields for them```
| replace "4722" with "enabled" in firstEvent, lastEvent
| replace "4725" with "disabled" in firstEvent, lastEvent
| search account != "*\$" AND firstEvent != "enabled" AND lastEvent != "disabled"
| eval enabled_DT=mvindex(times,-1), disabled_DT=mvindex(times,-2) ```create fields to show when the affected account was disabled then re-enabled```
| where enabled_DT > relative_time(now(), "-1h@h") ```this determines what range to look for the re-enabling```
| table first_action_user, account, last_action_user, disabled_DT, enabled_DT
| rename first_action_user as "Disable Actioning Account", account as "Enabled Account", last_action_user as "Enable Actioning Account", disabled_DT as "DateTime Disabled", enabled_DT as "DateTime Enabled"
| convert ctime("DateTime Disabled"), ctime("DateTime Enabled") ```need to convert the time from Unix Epoch to standard time```
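To run this as an hourly alert matching the -1h@h window, a scheduling stanza along these lines would work — a sketch; the stanza name is a placeholder:

# savedsearches.conf (sketch) — run hourly so nothing slips past the -1h@h window
[AD account re-enabled after recent disable]
enableSched = 1
cron_schedule = 5 * * * *
dispatch.earliest_time = -90d
dispatch.latest_time = now
# search = <the full query above, on one line>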
Query for when PowerShell execution policy is set to Bypass
index="windows" sourcetype=WinRegistry key_path="HKLM\\software\\microsoft\\powershell\\1\\shellids\\microsoft.powershell\\executionpolicy" | table _time, host, registry_type, registry_value_data, registry_value_name | rename host as Host, registry_type as Action, registry_value_data as "Registry Value", registry_value_name as "Registry Value Name" |
Reports Owned by Admin Users and Writable by Others
| rest /servicesNS/-/-/saved/searches splunk_server=local
| search [| rest /services/authentication/users splunk_server=local | search roles="admin" | fields title | rename title as author] OR author="nobody"
| rename title AS savedsearch_name, eai:acl.app as app, eai:acl.perms.write as write_roles
| table author write_roles splunk_server app savedsearch_name
| mvexpand write_roles
| where NOT write_roles IN("","admin")
| mvcombine write_roles
| eval search_name_for_link=savedsearch_name
| rex field=search_name_for_link mode=sed "s:%:%25:g s: :%20:g s:<:%3C:g s:>:%3E:g s:#:%23:g s:{:%7B:g s:}:%7D:g s:\|:%7C:g s:\\\:%5C:g s:\^:%5E:g s:~:%7E:g s:\[:%5B:g s:\]:%5D:g s:`:%60:g s:;:%3B:g s:/:%2F:g s:\?:%3F:g s/:/%3A/g s:@:%40:g s:=:%3D:g s:&:%26:g s:\$:%24:g s:\!:%21:g s:\*:%2A:g"
| eval link="https://".splunk_server."/en-US/manager/".app."/admin/directory?ns=".app."&pwnr=-&app_only=1&search=".search_name_for_link
| fields - search_name_for_link splunk_server
I got a little fancy there with search_name_for_link. The link is for clicking from the inline table of the alert email. You can easily skip that.