Installing Filebeat on Ubuntu
Before configuring log collection, you need to install Filebeat. Run the following commands to add Elastic's GPG key, set up the APT repository, and install the latest 7.x version (adjust the version in the URL if needed):
sudo apt update && sudo apt upgrade -y
wget -qO - https://artifacts.elastic.co/GPG-KEY-elasticsearch | sudo apt-key add -
echo "deb https://artifacts.elastic.co/packages/7.x/apt stable main" | sudo tee -a /etc/apt/sources.list.d/elastic-7.x.list
sudo apt update && sudo apt install filebeat -y
This ensures Filebeat is installed and ready for configuration.
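To confirm the install succeeded, you can run filebeat version and check the reported version number. The helper below is a small sketch that extracts the version from that command's output; the sample string is illustrative of the output format, not captured from a real host:

```python
import re

def parse_filebeat_version(output: str) -> str:
    """Extract the semantic version from `filebeat version` output."""
    match = re.search(r"filebeat version (\d+\.\d+\.\d+)", output)
    if not match:
        raise ValueError("no version found in: " + output)
    return match.group(1)

if __name__ == "__main__":
    # Illustrative sample of `filebeat version` output:
    sample = "filebeat version 7.17.9 (amd64), libbeat 7.17.9"
    print(parse_filebeat_version(sample))  # 7.17.9
    # On a real host, capture the output with the subprocess module:
    # import subprocess
    # out = subprocess.run(["filebeat", "version"], capture_output=True, text=True).stdout
```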
Configuring Log Inputs
The core of Filebeat’s configuration is defining input sources in /etc/filebeat/filebeat.yml. The filebeat.inputs section specifies which logs to collect. For basic system logs (e.g., syslog, auth logs), use:
filebeat.inputs:
- type: log
  enabled: true
  paths:
    - /var/log/syslog
    - /var/log/auth.log
For custom application logs (e.g., Nginx, Apache), add their directories or file patterns:
- type: log
  enabled: true
  paths:
    - /var/log/nginx/*.log
    - /var/log/apache2/*.log
Optional parameters like ignore_older (to skip old logs) or exclude_files (to filter unwanted files) can be added:
- type: log
  enabled: true
  paths:
    - /var/log/myapp/*.log
  ignore_older: 72h # Ignore logs older than 72 hours
  exclude_files: ['\.gz$'] # Exclude gzipped files
These settings ensure Filebeat monitors the correct logs efficiently.
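When many hosts need slightly different input sections, the YAML above can be generated by a script. A minimal sketch using plain string building (no YAML library assumed), producing one log input entry:

```python
def render_log_input(paths, ignore_older=None, exclude_files=None):
    """Render one filebeat.inputs entry of type `log` as YAML text."""
    lines = ["- type: log", "  enabled: true", "  paths:"]
    lines += [f"    - {p}" for p in paths]
    if ignore_older:
        lines.append(f"  ignore_older: {ignore_older}")
    if exclude_files:
        rendered = ", ".join(f"'{pat}'" for pat in exclude_files)
        lines.append(f"  exclude_files: [{rendered}]")
    return "\n".join(lines)

if __name__ == "__main__":
    # Hypothetical application log path, matching the example above:
    print("filebeat.inputs:")
    print(render_log_input(["/var/log/myapp/*.log"],
                           ignore_older="72h",
                           exclude_files=[r"\.gz$"]))
```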
Setting Up Output Targets
Filebeat sends collected logs to a centralized destination (e.g., Elasticsearch, Logstash). Below are common configurations:
- Elasticsearch (Direct): Send logs to a local or remote Elasticsearch instance. Replace localhost with your Elasticsearch server's IP if remote.
  output.elasticsearch:
    hosts: ["localhost:9200"]
    index: "filebeat-%{[agent.version]}-%{+yyyy.MM.dd}" # Dynamic index name with version and date
  Note that when you override index, Filebeat also requires setup.template.name and setup.template.pattern to be set.
- Logstash (For Processing): Send logs to Logstash for advanced parsing/filtering before storing in Elasticsearch.
  output.logstash:
    hosts: ["localhost:5044"]
Adjust the hosts parameter based on your infrastructure (e.g., ["es-node1:9200", "es-node2:9200"] for a cluster).
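When scripting health checks around these outputs, host entries like "es-node1:9200" need to be split into host and port. A small parsing sketch; the fallback port 9200 is an assumption appropriate for Elasticsearch targets (Logstash would use 5044):

```python
def parse_host(entry: str, default_port: int = 9200):
    """Split a 'host:port' string, falling back to a default port."""
    # Strip an optional scheme such as https://
    if "://" in entry:
        entry = entry.split("://", 1)[1]
    if ":" in entry:
        host, port = entry.rsplit(":", 1)
        return host, int(port)
    return entry, default_port

if __name__ == "__main__":
    # Hypothetical cluster entries:
    for e in ["localhost:9200", "https://es-node1:9200", "es-node2"]:
        print(parse_host(e))
```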
Starting and Enabling Filebeat
After saving the configuration (sudo nano /etc/filebeat/filebeat.yml), start the Filebeat service and enable it to run on boot:
sudo systemctl start filebeat
sudo systemctl enable filebeat
Use these commands to check the service status and troubleshoot issues:
sudo systemctl status filebeat # Check if running
sudo tail -f /var/log/filebeat/filebeat # View real-time logs
A green “active (running)” status confirms Filebeat is operational.
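For unattended monitoring, systemctl is-active filebeat prints a single state word ("active", "inactive", "failed") that is easy to script against. A sketch that interprets that output; the subprocess call is left as a comment so the example runs anywhere:

```python
def is_service_active(state_output: str) -> bool:
    """Interpret the output of `systemctl is-active <unit>`."""
    return state_output.strip() == "active"

if __name__ == "__main__":
    # On a real host:
    # import subprocess
    # state = subprocess.run(["systemctl", "is-active", "filebeat"],
    #                        capture_output=True, text=True).stdout
    print(is_service_active("active\n"))    # True
    print(is_service_active("inactive\n"))  # False
```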
Verifying Log Collection
To ensure Filebeat is sending logs correctly:
- Check Elasticsearch Indices: Run curl -X GET "localhost:9200/_cat/indices?v" (replace localhost if needed). Look for indices starting with filebeat- (e.g., filebeat-7.14.0-2025.10.12).
- View Filebeat Logs: Use sudo tail -f /var/log/filebeat/filebeat to check for errors (e.g., "failed to connect to Elasticsearch") or successful events (e.g., "publishing 100 events").
- Test with Sample Logs: Create a test log file (e.g., /var/log/test.log) and add content (echo "Test log entry" >> /var/log/test.log). Update Filebeat's paths to include this file and restart the service. Verify the index receives the new entries.
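The index check above can be automated as well. This sketch filters the tabular output of _cat/indices?v down to filebeat-* index names; the sample text mimics the header format of that endpoint rather than real cluster output:

```python
def filebeat_indices(cat_output: str):
    """Return index names starting with 'filebeat-' from `_cat/indices?v` output."""
    lines = cat_output.strip().splitlines()
    header = lines[0].split()
    idx_col = header.index("index")  # locate the index-name column
    names = []
    for line in lines[1:]:
        cols = line.split()
        if len(cols) > idx_col and cols[idx_col].startswith("filebeat-"):
            names.append(cols[idx_col])
    return names

if __name__ == "__main__":
    # Illustrative sample in the shape of `_cat/indices?v` output:
    sample = (
        "health status index                      uuid pri rep\n"
        "green  open   filebeat-7.14.0-2025.10.12 abc1 1   1\n"
        "green  open   .kibana_1                  def2 1   0\n"
    )
    print(filebeat_indices(sample))  # ['filebeat-7.14.0-2025.10.12']
```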
Advanced Tips for Production
- Use Filebeat Modules: Simplify log collection for common applications (e.g., system, Nginx, MySQL) by enabling modules. For example, to enable the system module:
  sudo filebeat modules enable system
  sudo systemctl restart filebeat
  Modules automatically configure inputs, processors, and dashboards for specific log types.
- Secure Data Transmission: Encrypt data between Filebeat and Elasticsearch using TLS/SSL. Generate certificates (e.g., with elasticsearch-certutil) and configure Filebeat:
  output.elasticsearch:
    hosts: ["https://your-elasticsearch-host:9200"]
    ssl.verification_mode: certificate
    ssl.certificate_authorities: ["/etc/filebeat/certs/ca.crt"]
    ssl.certificate: "/etc/filebeat/certs/client.crt"
    ssl.key: "/etc/filebeat/certs/client.key"
  This prevents eavesdropping on sensitive log data.
- Optimize Performance: Adjust parameters to handle large log volumes efficiently. Key settings include:
  harvester_buffer_size: Per-harvester read buffer size in bytes (default: 16384); increase for large files.
  scan_frequency: How often Filebeat scans the configured paths for new files (default: 10s; increase for less frequent checks).
  queue.mem.flush.min_events: Minimum number of events to publish in one batch, set at the top level of filebeat.yml, to reduce network overhead.
  Example:
  filebeat.inputs:
  - type: log
    enabled: true
    paths:
      - /var/log/myapp/*.log
    harvester_buffer_size: 32768
    scan_frequency: 15s
  queue.mem:
    flush.min_events: 2048
  These optimizations improve throughput and reduce resource usage.
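When provisioning hosts with modules, filebeat modules list prints an Enabled: section followed by a Disabled: section, one module name per line. A sketch that extracts the enabled names so a provisioning script can verify its work (the sample string mimics that layout):

```python
def enabled_modules(list_output: str):
    """Parse `filebeat modules list` output into the enabled module names."""
    enabled, in_enabled = [], False
    for line in list_output.splitlines():
        stripped = line.strip()
        if stripped == "Enabled:":
            in_enabled = True
        elif stripped == "Disabled:":
            in_enabled = False
        elif in_enabled and stripped:
            enabled.append(stripped)
    return enabled

if __name__ == "__main__":
    # Illustrative sample of `filebeat modules list` output:
    sample = "Enabled:\nsystem\nnginx\n\nDisabled:\napache\nmysql\n"
    print(enabled_modules(sample))  # ['system', 'nginx']
```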