How to get logs of a specific time range on Linux?

The logs I am processing are Hadoop logs (log4j). They are in a format like:

2014-09-20 21:55:11,855 INFO org.apache.hadoop.nfs.nfs3.IdUserGroup: Updated user map size: 36
2014-09-20 21:55:11,863 INFO org.apache.hadoop.nfs.nfs3.IdUserGroup: Updated group map size: 55
2014-09-20 22:10:11,907 INFO org.apache.hadoop.nfs.nfs3.IdUserGroup: Update cache now
2014-09-20 22:10:11,907 INFO org.apache.hadoop.nfs.nfs3.IdUserGroup: Not doing static UID/GID mapping because '/etc/nfs.map' does not exist.

Now, I want to get all the logs within a specific time range, e.g. the last 4 hours. How can I achieve this?

It should be done with command-line tools, since it runs in an automatic routine invoked by crond every 4 hours.
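
For reference, a run-every-4-hours routine can be scheduled with a crontab entry like the one below (the script path is a made-up example):

# Invoke the log-extraction script at minute 0 of every 4th hour
0 */4 * * * /usr/local/bin/extract-recent-hadoop-logs.sh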

You can use date to generate the timestamp prefixes of the hours in the range, then grep the log for lines starting with them:

# Paths are examples; point them at your actual log and output files
log=/path/to/hadoop-hdfs-nfs3.log
tmplog=/tmp/recent-hadoop.log

# grep out the logs of the last 4 hours by their hour-granularity prefixes
: > "$tmplog"   # truncate the output file instead of writing a blank line
for ((i=4; i>=1; i--)); do
    grep "^$(date -d "-${i} hour" +'%Y-%m-%d %H')" "$log" >> "$tmplog"
done
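
Note that matching on the "%Y-%m-%d %H" prefix selects whole hour buckets (the 4 hours before the current one), not an exact rolling 4-hour window. If an exact window is needed, here is a minimal sketch with awk, assuming GNU date and the log4j timestamp format shown above ($log and $tmplog as defined earlier):

# Keep lines whose leading "YYYY-MM-DD HH:MM:SS" timestamp falls within
# the last 4 hours; ISO timestamps compare correctly as plain strings
since=$(date -d '-4 hours' +'%Y-%m-%d %H:%M:%S')
awk -v since="$since" 'substr($0, 1, 19) >= since' "$log" > "$tmplog"

Lines without a leading timestamp (e.g. Java stack traces) would need extra handling, such as carrying the decision for the last seen timestamp forward.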
