You need to extract the year and month from the filename to be able to ask for the last Saturday. So, just get the day back and compose it with the year and month you already extracted:

#!/bin/bash
filename=SOURCE_FILE_042014.CSV
date=${filename##*_}
date=${date%.CSV}
month=${date:0:2}
year=${date:2}
day=$(cal $month $year | awk...
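One way to finish that awk (a sketch, assuming a Sunday-first cal layout, the default in most locales, so Saturday is the rightmost column): every week row ends on a Saturday except possibly the last, partial one.

day=$(cal "$month" "$year" | awk 'NR > 2 && NF { if (NR == 3 || NF == 7) d = $NF } END { print d }')
echo "Last Saturday: $year-$month-$day"

The first data row (NR == 3) always ends on a Saturday even when it is padded on the left; later rows end on a Saturday exactly when they have all 7 fields.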
The first step is:

git checkout -b feature-test <sha1 split here>

But you also need to reset feature to <sha1 split here>:

git checkout feature
git reset --hard <sha1 split here>

Note that if you already pushed feature, you will need to do a git push --force. And that might...
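Putting the whole sequence together (with <sha1 split here> still standing in for the commit you split at, which only you know):

git checkout -b feature-test <sha1 split here>   # new branch at the split point
git checkout feature
git reset --hard <sha1 split here>               # move feature back to the split point
git push --force                                 # only if feature was already pushed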
It's very simple with grep -o '...$':

cat /etc/passwd | grep -o '...$'
ash
/sh
/sh
/sh
ync
/sh
/sh
/sh

Or better yet:

N=3; grep -o ".\{$N\}$" </etc/passwd
ash
/sh
/sh
/sh
ync
/sh
/sh

That way you can adjust your N for whatever value you like....
I looked at the source code for reverb and I think it's very easy to adapt it to produce the output you want. If you look at the reverb class CommandLineReverb.java, it has the following two methods:

private void extractFromSentReader(ChunkedSentenceReader reader) throws ExtractorException {
    long start;
    ChunkedSentenceIterator sentenceIt = reader.iterator();...
As the others already stated in the comments, your "NAs introduced by coercion" is not reproducible. But let me just give you a hint on how to make the code more "scalable" and readable:

x <- c(1890, 1899,1900,2001,2012,1999,1943,1944,1950,1988,1981,1988,1997,2014)
brk <- seq(1890, 2020, by=10)  # breaks
cut(x, breaks=brk, right=FALSE, labels=paste(brk[-length(brk)], "s",...
The bottleneck is likely that you spawn several processes for every line of data. As for a replacement, this awk should be equivalent:

awk '{ split($0, a, "\""); print $2, $3, a[20] }' TEST.log > IDS.log
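For contrast, the kind of per-line pipeline this replaces might look like the following (a hypothetical reconstruction; the original loop isn't shown). Each iteration forks a subshell and a cut process, while the single awk above reads the whole file in one process:

# Hypothetical slow version: forks cut (and a subshell) for every input line
while read -r line; do
    printf '%s\n' "$line" | cut -d'"' -f20
done < TEST.log > IDS.log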
Through grep:

$ grep -oP '(?:\d{1,3}\.){3}\d{1,3}(?=\(mgn\))' file
222.22.2.221
222.22.2.222
222.22.2.223

Through sed:

$ sed 's/.*\b\(\([0-9]\{1,3\}\.\)\{3\}[0-9]\{1,3\}\)(mgn).*/\1/g' file
222.22.2.221
222.22.2.222
222.22.2.223
Maybe this script does what you want to do:

#!/bin/bash
while read -r line
do
    file=$(echo "$line" | cut -d' ' -f1)
    path=$(echo "$line" | cut -d' ' -f3)
    ## If file exists, then move it to path
    [[ -f $file ]] && mv "$file" "$path"
done < deleted_files.txt

By the way,...
python,string,parsing,pattern-matching,cut
Import this at the beginning:

import re

Now use this line between dictValue and dictMerged:

new_dict_value = [re.sub(r'\d.*', '', x) for x in dictValue]

and then use new_dict_value in the next line...
linux,shell,command-line-interface,cut
You can use sed:

sed 's/^[^[:blank:]@]\+@[^[:blank:]]\+[[:blank:]]*//' file > file.out
some string with no set width
another string
yet another string!!
shortstring

GNU sed will work with this:

sed 's/^[^\s@]\+@\S\+\s*//' file > file.out
The problem is that you are using -c with cut. Don't do that. Use the -f and -d flags instead to control the delimiter and fields to output. Or use awk -F . '{print $2}' <<< "$(uname -r)". Or use IFS=. read -r _ minor _rest <<< "$(uname -r)"; echo "$minor"
Using awk (actually gawk, since the three-argument match() is a GNU extension):

awk '{match($0,/PASSWORD=(.*==)/,a); print a[1];}' input.txt

Using cut you can try this, though I'm not sure it works with your file:

cut -d"=" -s -f2,3 --output-delimiter="==" input.txt
What is happening is that you did not quote $line when reading the file. The original tab-delimited format was then lost: instead of tabs, spaces appear between the words. And since cut's default delimiter is a TAB, it does not find any and prints the whole line. So...
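A quick demonstration of the quoting issue, using a synthetic tab-delimited line:

line=$'one\ttwo\tthree'
echo $line | cut -f2     # unquoted: word splitting turns the tabs into spaces,
                         # cut finds no TAB and prints: one two three
echo "$line" | cut -f2   # quoted: the tabs survive, prints: two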
Please, do things properly:

for server in /data/field/*; do
    string=$(cut -d" " -f3- "$server/time")
    echo "$string"
done

(Note that $server already expands to the full /data/field/... path, so don't prefix it again.)

- backticks are deprecated in favor of the form $( )
- don't parse ls output; use a glob instead, like I do with /data/field/*

Check http://mywiki.wooledge.org/BashFAQ for various subjects...
To summarise my comments, I suggest something like this (untested as I have no sample file):

NM=$(awk 'NR==1{print NF-2}' file.txt)
echo $NM
for (( i=1; i <= $NM; i++ ))
do
   echo $i
   awk '{print $'$i'}' file.txt > tmpgrid_0${i}.dat
done
You can do the following:

grep -P -o 'text=\S+ id=\S+'

The -P flag for grep enables Perl regular expressions; \S+ matches a run of non-whitespace characters; -o outputs only the matched portion. This assumes you need the values of the "text" and "id" fields. Modify the regular expression as...
This question is more about being organized and neat with your data. There are many ways to do this. I would recommend separating out the data you want to bin into its own data.frame:

x <- dataset[, 50:60]

Then bin those columns into new columns by making a function with the parameters...
If I understood your question correctly, the following Bash script should do the trick:

#!/bin/bash
IFS="="
while read k v ; do
    test -z "$k" && continue  # skip empty lines
    declare $k=$v
done <test.txt
echo $Name
echo $Age
echo $Place

Why does that work? Most information can be retrieved...
One way with awk:

$ awk '{printf "%s:%s:%s:",$2,$4,$6;for(i=7;i<NF;i++)printf "%s-",$i;print $NF}' file
1.2.3.4:xxx:a:Q-W
5.6.7.8:yyy:b:X-Y
9.10.11.12:zzz:c:L-N-X

Explanation (the script runs for every line in the file):

printf "%s:%s:%s:",$2,$4,$6;  # print the 2nd, 4th and 6th fields separated by a :
for(i=7;i<NF;i++)             # from the 7th field to the penultimate field
    printf "%s-",$i;...
This GNU sed could work:

sed -n -r '/Denied:/{N; s/^.*name="([^"]*)".*$/\1/; p}' file

-n suppresses automatic printing of lines
-r uses extended regular expressions (used for grouping here, so the () characters need no escaping)
N reads the next line and appends it to the pattern space
s/input/output/ is a substitution
^ is the start of the line,...
Dealing with floating point numbers is a notoriously messy problem in computer science. Since computers store numbers in base 2 rather than base 10, certain numbers that we commonly use in base 10 simply cannot be expressed exactly in base 2. I'd recommend doing as much of the work as...
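A quick way to see the base-2 representation problem from the shell (bash's builtin printf understands %f; the exact digits assume a typical 64-bit double):

$ printf '%.20f\n' 0.1
0.10000000000000000555

The decimal 0.1 has no finite base-2 expansion, so the stored double is the nearest representable value, not 0.1 itself.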
That's probably because it's tab delimited (which is the default delimiter of cut):

~$ du -c foo | grep total | cut -f1
4
~$ du -c foo | grep total | cut -d' ' -f1
4

To insert a tab on the command line, use Ctrl+V, then TAB. Alternatively, you could use awk...
javascript,regex,string,trim,cut
You can try:

infotext = infotext.replace(/^([\s\S]{10}\S*)[\s\S]*/, "$1");

The problem is your use of [\s\S]*{10}. JSFiddle...
You can use the extended flavour, anchor to the beginning of the line, and match every character until it finds a dash, like:

grep -oE '^[^-]*' infile

It yields:

NAME
ANOTHER NAME
THIRD
FOURTH
FIFTH NAME
echo "$get_ip" | grep -v "entries" | awk '{print $2}' or with cut: echo "$get_ip" | tr -s " " | grep -v "entries" | cut -d " " -f 2 ...
bash,sorting,multiple-columns,cut
The --key=1 option tells sort to use all "fields" from the first through the end of the line to sort the input. As @rici observed first, by default this is a locale-sensitive sort, and in many locales whitespace is ignored for collation purposes. That's what seems to be happening here....
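A quick way to see the effect (the exact ordering depends on your locale; this assumes a typical glibc UTF-8 locale, where punctuation is ignored at the primary collation level):

$ printf 'a-b\naa\n' | sort          # e.g. en_US.UTF-8: '-' ignored, so 'aa' < 'ab'
aa
a-b
$ printf 'a-b\naa\n' | LC_ALL=C sort # byte order: '-' (0x2d) sorts before 'a'
a-b
aa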
This is pretty much what awk was made for:

awk '$1 == "gene_biotype" {print $4, $6}' < input.txt

Explanation: $N refers to a field, by default separated by whitespace (any amount of it). The equality check says "execute the rest of the line only when the first field matches gene_biotype". Then the appropriate...
Use neither. Unless it proves to be too slow, use the csv module, which is far more readable:

import csv

with open('test.txt','r') as infile:
    column23 = [ cols[1:3] for cols in csv.reader(infile, delimiter="\t") ]
This is because there are multiple spaces, and cut can only handle them one at a time: each space is a separate delimiter, so the fields between consecutive spaces are empty. You can start from the 5th position:

$ cut -d' ' -f 1,5- file
ATOM HD13 ILE 206 9.900 15.310 13.450 0.0196 1.4870
ATOM C ILE 206 10.870 16.560 17.500 0.8343 1.9080
ATOM OXT...
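An alternative not in the original answer: if the amount of padding varies from line to line, squeeze the runs of spaces first with tr -s so every field is exactly one cut field away (note this shifts the field numbers):

$ tr -s ' ' < file | cut -d' ' -f1,2
ATOM HD13
ATOM C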
With GNU awk (for the 3rd arg to match()):

$ gawk 'match($0,/id="[^" ]+"/,a){ print $3, a[0] }' file
1234j12342134h id="y_123456"
1234j123421342 id="y_123458"
1234j123421346 id="y_123410"

With other awks:

$ awk 'match($0,/id="[^" ]+"/){ print $3, substr($0,RSTART,RLENGTH) }' file
1234j12342134h id="y_123456"
1234j123421342 id="y_123458"
1234j123421346 id="y_123410"

or if you want to strip some of the...
Replace your line

grep -i "$last_name" $1 | cut -f 1-7 -d ':' ;;

with

awk -F: -vnameMatch="$last_name" \
    '$1==nameMatch{ printf("LastName:%s\nFirstName:%s\nCity:%s\nState:%s\nClass:%s\nSemester Enrolled:%s\nYear First Enrolled:%s\n\n", \
    $1, $2, $3, $4, $5, $6, $7) }' $1 ;;

It's pretty much the same idea in ksh:

while IFS=: read c1 c2 c3 c4...
The first one is like this:

awk '/cpu MHz/ {print $4}' < /proc/cpuinfo | awk -F'.' 'NR==1 {print $1}'

Considering you have input like this:

cpu MHz : 800.000
cpu MHz : 800.000
cpu MHz : 800.000
cpu MHz : 800.000

and you want the integer part of the number...
sql,postgresql,split,delimiter,cut
You can try doing something based on this:

select varcharColumnName,
       INSTR(varcharColumnName,'-',1,2),
       case when INSTR(varcharColumnName,'-',1,2) <> 0
            THEN SUBSTR(varcharColumnName, 1, INSTR(varcharColumnName,'-',1,2) - 1)
            else '...'
       end
from tableName;

Of course, you have to handle the "else" branch the way you want. It works on Postgres and Oracle (tested); it should work on other...
bash,shell,substring,cut,substrings
Instead of cut, use dirname and basename:

input=/path/to/foo
dir=$(dirname "$input")
file=$(basename "$input")

Now $dir is /path/to and $file is foo. dirname will also give you a valid directory for paths relative to the working directory (I mean that $(dirname file.txt) is .). This means, for example, that you can write...
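A few more cases showing why the pair is more robust than cut (the paths here are made up):

$ dirname file.txt     # no directory part: "." (the working directory)
.
$ basename /path/to/   # a trailing slash is ignored
to
$ dirname /path/to/foo
/path/to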
string,batch-file,truncate,cut
Here is a script that performs all the computations in one call to PowerShell:

@echo off
setlocal

:: Get free physical memory, measured in KB
for /f "skip=1" %%A in ('wmic os get freephysicalmemory') do for %%B in (%%A) do set free_KB=%%B

:: Get total physical memory, measured in B...
I would use awk:

$ echo "/dir1/dir2/dir3.../importance/lib1/lib2/lib3/file" | awk -F"/importance/" '{print FS$2}'
importance/lib1/lib2/lib3/file

Which is the same as:

$ awk -F"/importance/" '{print FS$2}' <<< "/dir1/dir2/dir3.../importance/lib1/lib2/lib3/file"
importance/lib1/lib2/lib3/file

That is, we set the field separator to /importance/, so that the first field is what comes before it and the 2nd one is...
Just say:

awk 'END {print $(NF-1), $NF}'

"Normal" awks keep the last line's fields available (though not all awks do!), so the line is still accessible by the time you reach the END block. Then it is just a matter of printing the penultimate and the last field. This can be done using...
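A quick demonstration with made-up two-line input (works in gawk and mawk, which retain the last record in END):

$ printf 'a b c\nd e f\n' | awk 'END {print $(NF-1), $NF}'
e f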
You could do ls -l . | awk '{print $1}', but you should follow the general advice to avoid parsing the output of ls. The usual way to avoid that is to loop over the files to get the information you need. To get the...
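A sketch of such a loop using stat (the -c format is the GNU coreutils one; BSD/macOS stat uses -f '%Sp' instead):

for f in ./*; do
    perms=$(stat -c '%A' -- "$f")   # human-readable permissions, e.g. -rw-r--r--
    printf '%s %s\n' "$perms" "$f"
done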
You can use the following script to dynamically traverse through your variable, no matter how many fields it has, as long as it is only comma separated:

variable=abc,def,ghij
for i in $(echo $variable | sed "s/,/ /g")
do
    # call your procedure/other scripts here below
    echo "$i"
done

Instead of...
Instead of cut -f1 -d: you can use awk: awk -F: '{printf "%s, ", $1} END {print ""}' ...
You can separate $result into the different variables you describe by using read:

IFS=: read TITLE AUTHOR PRICE QUANTITY UNIT <<< "$result"

Example:

$ result="wrinkle in time:myauthor:50.00:20:50"
$ IFS=: read TITLE AUTHOR PRICE QUANTITY UNIT <<< "$result"
$ echo "$TITLE - by - $AUTHOR"
wrinkle in time - by -...
d1 <- cbind(d1, o_all = apply(d1[, -1], 1, function(x) {
    i <- which.max(!is.na(x) & x > 0)
    if(x[i] == 0) 0 else i + 4
}))
#      ID o5 o6 o7 o_all
#[1,]   1  1 NA  0     5
#[2,]   2  0  0  0     0
#[3,]   3  2 NA NA     5...
Try this:

PLACE=$(grep foo flatfile.txt | cut -d '/' -f 1-6 | xargs -I "%" echo %/)
You could use grep:

$ echo 'Password expires 1-4-2015 15:41:05' | grep -o '\b[0-9]\{1,2\}-[0-9]\{1,2\}-[0-9]\{4\}\b'
1-4-2015
$ echo 'Password expires 20-12-2015 15:41:05' | grep -o '\b[0-9]\{1,2\}-[0-9]\{1,2\}-[0-9]\{4\}\b'
20-12-2015

To grep only the year:

$ echo 'Password expires 20-12-2015 15:41:05' | grep -oP '^(?:[^-]*-){2}\K\d{4}\b'
2015

To get only the day:

$ echo 'Password...
I would have used awk here, since you can do it all with one command:

readelf -S hello | awk '/data|bss/ {print $1,$2,$5,$6}'

awk works with any blank space as a separator: one space, multiple spaces, tabs, etc.
awk 'split($3,a,":"){print a[2]?$4:$4-1}' file

or, if you still want the other fields:

awk 'split($3,a,":"){$4=a[2]?$4:$4-1}1' file

or even:

awk 'split($3,a,":")&&!a[2]{$4--}1' file

Another way. Only $4:

awk '{$0=$4-($3~/:0$/)}1' file

Whole line:

awk '{$4-=($3~/:0$/)}1' file
file,batch-file,token,cut,between
Like this:

@echo off
for /f "tokens=3,4 Delims=() " %%a in (test.log) do (
    set "$Number=%%a"
    set "$Unit=%%b"
)
echo Number : [%$Number%]
echo Unit : [%$Unit%]

I used a file named test.log for the test...
Use this script:

cat merchent.csv | cut -d "," -f2 | while read; do
    IFS='. ' read -ra ADDR <<< "$REPLY"
    for i in "${ADDR[@]}"; do
        # process "$i"
        echo -ne "$i "
    done
    echo " "
done

Here I'm guessing your data is like business_name1,contact_name1 (in the format Mr. John...
grep Host: $FILE | tail -1 | grep -Po '.*Host: \K.*\)'

The interesting part is the last grep:

-P   use Perl regex
-o   output only the matched part
\K   similar to a lookbehind, but supports variable length
.*\) match the part you need
regex,string,bash,substring,cut
With bash variables, you can do this:

DOMAINNAME=abcdef
CHAR2=${DOMAINNAME:1:1}
CHAR4=${DOMAINNAME:3:1}
echo "char2=$CHAR2, char4=$CHAR4"

gives:

char2=b, char4=d

Explanation: ${DOMAINNAME:3:1} means: take the substring starting at the character at position 3 (0-based, so the 4th character), with length = 1 character.
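More generally, ${var:offset:length} extracts any slice; a couple more examples with the same variable:

echo "${DOMAINNAME:0:3}"    # abc  (first three characters)
echo "${DOMAINNAME:2}"      # cdef (from position 2 to the end)
echo "${DOMAINNAME: -2}"    # ef   (last two; note the space before the minus)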
You don't say what your desired output is, but this shows you the right approach:

$ cat tst.awk
NR==1 {
    print
    while ( match($0,/[^[:space:]]+[[:space:]]*/) ) {
        width[++i] = RLENGTH
        $0 = substr($0,RSTART+RLENGTH)
    }
    next
}
{
    i = 0
    while ( (fld = substr($0,1,width[++i])) != "" ) {
        gsub(/^ +|...
string,awk,substring,extract,cut
I would use sed:

$ echo "asdfasd_d20150616asdasd" | sed -r 's/^.*_d(.{8}).*$/\1/'
20150616

This removes everything up to _d, captures the following 8 characters, and prints them back. sed -r is used so groups can be written with just () instead of \(\).

^.*_d(.{8}).*$
^ ...
You need to make proper use of the lookaround feature; your lookbehind is fine but your lookahead is not. Try this:

grep -Po "(?<=<cite>).*?(?=</cite>)"

Ex:

echo '<cite>www.site.com/sdds/ass</cite>A-"><div Class="sa_mc"><div class="sb_tlst"><h3><a href=' | grep -Po "(?<=<cite>).*?(?=</cite>)"
www.site.com/sdds/ass

Disclaimer: it's bad practice to parse XML/HTML with regex. You should probably use a parser...
Remove all zeros

Use grep -v:

-v, --invert-match
    Selected lines are those not matching any of the specified patterns.

Command:

grep -v -e "^0$" file

The caveat is that this removes every line consisting of a single '0'.

Remove even lines

awk 'NR % 2 != 0' file

In this...
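If you want both filters at once, they compose into a single awk (a sketch, assuming "remove even lines" means keeping the odd-numbered ones as above):

awk 'NR % 2 != 0 && $0 != "0"' file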