DuckCorp Projects: Issues
https://projects.duckcorp.org/
https://projects.duckcorp.org/favicon.ico?1669909042
2022-07-10T10:42:55Z
DuckCorp Projects
Redmine
DuckCorp Infrastructure - Bug #776 (Resolved): Users are unable to register to projects.duckcorp.org
https://projects.duckcorp.org/issues/776
2022-07-10T10:42:55Z
Pierre-Louis Bonicoli
pierre-louis.bonicoli@ir5.eu
<p>There is an issue with the captcha:<br /><pre>
Oops, we failed to validate your reCAPTCHA response. Please try again.
</pre><br />I tried with both Firefox and Chromium.</p>
<p><code>/var/log/redmine/dc/production.log</code> from the <code>redmine</code> LXC container:<br /><pre>
Started POST "/account/register" for 185.238.6.46 at 2022-07-10 12:53:52 +0000
Processing by AccountController#register as HTML
Parameters: {"utf8"=>"✓", "authenticity_token"=>"[REDACTED]", "user"=>{"login"=>"pilou_test", "password"=>"[FILTERED]", "password_confirmation"=>"[FILTERED]", "firstname"=>"pilou", "lastname"=>"pilou_test", "mail"=>"pilou_test@ir5.eu", "language"=>"fr"}, "g-recaptcha-response"=>"[REDACTED]", "commit"=>"Soumettre"}
Current user: anonymous
Rendering plugins/recaptcha/app/views/account/register.html.erb within layouts/base
Rendered plugins/recaptcha/app/views/account/register.html.erb within layouts/base (8.8ms)
Completed 200 OK in 3022ms (Views: 14.7ms | ActiveRecord: 1.4ms)
</pre></p>
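For context, server-side reCAPTCHA validation boils down to POSTing the <code>g-recaptcha-response</code> token (plus the site's secret key) to Google's <code>siteverify</code> endpoint and checking the JSON reply; if that outbound HTTPS call fails from the container, every registration attempt is rejected exactly as above. A minimal sketch, with the HTTP opener injected so it can be exercised offline (the helper name and the injection are illustrative, not the plugin's actual code):

```python
import json
import urllib.parse
import urllib.request

SITEVERIFY_URL = "https://www.google.com/recaptcha/api/siteverify"

def verify_recaptcha(secret, response_token, remote_ip=None,
                     opener=urllib.request.urlopen):
    """POST the token to siteverify and return (ok, error_codes).

    `opener` is injected so the network call can be stubbed in tests.
    """
    fields = {"secret": secret, "response": response_token}
    if remote_ip:
        fields["remoteip"] = remote_ip
    data = urllib.parse.urlencode(fields).encode()
    with opener(SITEVERIFY_URL, data) as resp:
        payload = json.loads(resp.read().decode())
    return payload.get("success", False), payload.get("error-codes", [])
```

The <code>error-codes</code> field in the reply (e.g. <code>timeout-or-duplicate</code>, <code>invalid-input-secret</code>) usually pinpoints whether the secret key or the token is at fault.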
DuckCorp Infrastructure - Bug #775 (Resolved): Ninjabot doesn't handle unreachable network
https://projects.duckcorp.org/issues/775
2022-07-10T09:29:08Z
Pierre-Louis Bonicoli
pierre-louis.bonicoli@ir5.eu
<p>Ninjabot was unable to reconnect after encountering a temporarily unreachable network:<br /><pre>
Jul 07 00:40:11 orthos.duckcorp.org ninjabot[1608725]: <= {} None PING ['irc2.duckcorp.org']
Jul 07 00:41:31 orthos.duckcorp.org ninjabot[1608725]: [126B blob data]
Jul 07 00:42:08 orthos.duckcorp.org ninjabot[1608725]: [132B blob data]
Jul 07 00:46:31 orthos.duckcorp.org ninjabot[1608725]: [129B blob data]
Jul 07 00:58:02 orthos.duckcorp.org ninjabot[1608725]: Traceback (most recent call last):
Jul 07 00:58:02 orthos.duckcorp.org ninjabot[1608725]: File "/opt/ninjabot/venv/bin/ninjabot", line 8, in <module>
Jul 07 00:58:02 orthos.duckcorp.org ninjabot[1608725]: sys.exit(ninjabot.cli())
Jul 07 00:58:02 orthos.duckcorp.org ninjabot[1608725]: File "/opt/ninjabot/venv/lib/python3.9/site-packages/ninjabot/ninjabot.py", line 38, in cli
Jul 07 00:58:02 orthos.duckcorp.org ninjabot[1608725]: client.start()
Jul 07 00:58:02 orthos.duckcorp.org ninjabot[1608725]: File "/opt/ninjabot/venv/lib/python3.9/site-packages/py_irc/irc.py", line 99, in start
Jul 07 00:58:02 orthos.duckcorp.org ninjabot[1608725]: buf = self.socket.recv(4096)
Jul 07 00:58:02 orthos.duckcorp.org ninjabot[1608725]: File "/usr/lib/python3.9/ssl.py", line 1226, in recv
Jul 07 00:58:02 orthos.duckcorp.org ninjabot[1608725]: return self.read(buflen)
Jul 07 00:58:02 orthos.duckcorp.org ninjabot[1608725]: File "/usr/lib/python3.9/ssl.py", line 1101, in read
Jul 07 00:58:02 orthos.duckcorp.org ninjabot[1608725]: return self._sslobj.read(len)
Jul 07 00:58:02 orthos.duckcorp.org ninjabot[1608725]: OSError: [Errno 101] Network is unreachable
Jul 07 04:56:32 orthos.duckcorp.org ninjabot[1608725]: [127B blob data]
Jul 07 04:56:32 orthos.duckcorp.org ninjabot[1608725]: Connection broke up
Jul 07 04:56:32 orthos.duckcorp.org ninjabot[1608725]: Attemting to connect to irc.milkypond.org
Jul 07 04:56:32 orthos.duckcorp.org ninjabot[1608725]: Connected to irc.milkypond.org
</pre><br />The bot wasn't actually connected at <code>04:56:32</code>; a manual restart of the service was required.</p>
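The traceback shows the crash comes from an unguarded <code>self.socket.recv(4096)</code> in <code>py_irc/irc.py</code>: an <code>OSError</code> (here <code>Errno 101</code>, network unreachable) escapes the read loop. A minimal sketch of the missing handling, wrapping the read in a reconnect-with-backoff loop (the <code>connect</code> callable and function name are illustrative, not ninjabot's real API):

```python
import time

def read_with_reconnect(connect, max_backoff=300, sleep=time.sleep):
    """Yield received chunks forever; reconnect with backoff on network errors.

    `connect` is a callable returning a fresh connected socket-like object.
    """
    backoff = 1
    sock = connect()
    while True:
        try:
            buf = sock.recv(4096)
            if not buf:            # orderly close: treat like an error
                raise OSError("connection closed")
        except OSError:            # e.g. [Errno 101] Network is unreachable
            try:
                sock.close()
            except OSError:
                pass
            sleep(backoff)
            backoff = min(backoff * 2, max_backoff)
            sock = connect()
            continue
        backoff = 1                # reset after a successful read
        yield buf
```

With something like this, the <code>Network is unreachable</code> episode at <code>00:58</code> would have been retried instead of leaving the process in a half-dead state until the manual restart.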
DuckCorp Infrastructure - Bug #769 (Rejected): Toushirou gets stuck randomly at boot
https://projects.duckcorp.org/issues/769
2022-04-15T23:36:48Z
Pierre-Louis Bonicoli
pierre-louis.bonicoli@ir5.eu
<p>Toushirou gets stuck randomly at boot.</p>
Another reboot party needs to be planned in order to investigate this issue:
<ul>
<li><a href="https://www.askapache.com/linux/linux-debugging/" class="external">kernel parameters</a>: <code>debug ignore_loglevel log_buf_len=10M print_fatal_signals=1 LOGLEVEL=8 earlyprintk=vga,keep sched_debug console=ttyS0,115200 systemd.log_level=debug</code></li>
<li><a href="https://www.suse.com/support/kb/doc/?id=000019461" class="external">step by step systemd boot process</a></li>
<li><a class="external" href="https://wiki.debian.org/systemd#systemd_hangs_on_startup_or_shutdown">https://wiki.debian.org/systemd#systemd_hangs_on_startup_or_shutdown</a></li>
</ul>
<p>Pictures:<br /><img src="https://projects.duckcorp.org/attachments/download/167/2022-04-13-185627_001.jpeg" loading="lazy" style="width: 50%;" alt="" /><br /><img src="https://projects.duckcorp.org/attachments/download/168/2022-04-13-185651_001.jpeg" loading="lazy" style="width: 50%;" alt="" /></p>
DuckCorp Infrastructure - Bug #766 (Resolved): Orfeo postman[1643199]: /usr/lib/ruby/vendor_ruby/...
https://projects.duckcorp.org/issues/766
2022-03-27T19:32:48Z
Pierre-Louis Bonicoli
pierre-louis.bonicoli@ir5.eu
<pre>
Mar 27 23:29:04 Orfeo postman[1643199]: /usr/lib/ruby/vendor_ruby/rubygems/core_ext/kernel_require.rb:85:in `require': cannot load such file -- active_ldap (LoadError)
Mar 27 23:29:04 Orfeo postman[1643199]: from /usr/lib/ruby/vendor_ruby/rubygems/core_ext/kernel_require.rb:85:in `require'
Mar 27 23:29:04 Orfeo postman[1643199]: from /opt/cyborghood/lib/cyborghood/objects/ldap.rb:24:in `<top (required)>'
Mar 27 23:29:04 Orfeo postman[1643199]: from /usr/lib/ruby/vendor_ruby/rubygems/core_ext/kernel_require.rb:85:in `require'
Mar 27 23:29:04 Orfeo postman[1643199]: from /usr/lib/ruby/vendor_ruby/rubygems/core_ext/kernel_require.rb:85:in `require'
Mar 27 23:29:04 Orfeo postman[1643199]: from /opt/cyborghood/lib/cyborghood/objects.rb:20:in `<top (required)>'
Mar 27 23:29:04 Orfeo postman[1643199]: from /usr/lib/ruby/vendor_ruby/rubygems/core_ext/kernel_require.rb:85:in `require'
Mar 27 23:29:04 Orfeo postman[1643199]: from /usr/lib/ruby/vendor_ruby/rubygems/core_ext/kernel_require.rb:85:in `require'
Mar 27 23:29:04 Orfeo postman[1643199]: from /opt/cyborghood/lib/cyborghood/mail.rb:22:in `<top (required)>'
Mar 27 23:29:04 Orfeo postman[1643199]: from /usr/lib/ruby/vendor_ruby/rubygems/core_ext/kernel_require.rb:85:in `require'
Mar 27 23:29:04 Orfeo postman[1643199]: from /usr/lib/ruby/vendor_ruby/rubygems/core_ext/kernel_require.rb:85:in `require'
Mar 27 23:29:04 Orfeo postman[1643199]: from /opt/cyborghood/bin/postman:30:in `<main>'
Mar 27 23:29:04 Orfeo systemd[1]: cyborghood_postman.service: Main process exited, code=exited, status=1/FAILURE
</pre>
DuckCorp Infrastructure - Bug #746 (Rejected): unexpected restart of Toushirou host
https://projects.duckcorp.org/issues/746
2021-12-13T14:16:57Z
Pierre-Louis Bonicoli
pierre-louis.bonicoli@ir5.eu
<p>Today Toushirou was restarted unexpectedly. It seems that this restart wasn't triggered by a command.</p>
<p>The server was restarted after <code>Dec 13 10:07:03</code> (UTC+1). I unlocked the encrypted disks around 13:15 (UTC+1).</p>
<p><code>syslog</code> contains:<br /><pre>
Dec 13 10:06:52 Toushirou postfix/smtpd[1353160]: disconnect from <redacted> ehlo=2 starttls=1 mail=1 rcpt=1 bdat=1 quit=1 commands=7
Dec 13 10:07:03 Toushirou stunnel: LOG5[8632]: Connection closed: 182 byte(s) sent to TLS, 20 byte(s) sent to socket
@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
[...]
@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
Dec 13 13:18:38 Toushirou systemd-udevd[631]: Using default interface naming scheme 'v247'.
Dec 13 13:18:38 Toushirou systemd-udevd[630]: Using default interface naming scheme 'v247'.
Dec 13 13:18:38 Toushirou lvm[578]: 3 logical volume(s) in volume group "extra" monitored
</pre></p>
<p>The filesystem journals were recovered:<br /><pre>
Dec 13 13:18:38 Toushirou systemd-fsck[791]: /dev/md0 was not cleanly unmounted, check forced.
Dec 13 13:18:38 Toushirou systemd-fsck[790]: /dev/mapper/main-ldap: recovering journal
Dec 13 13:18:38 Toushirou systemd-fsck[790]: /dev/mapper/main-ldap: clean, 14/23616 files, 9468/94208 blocks
Dec 13 13:18:38 Toushirou systemd[1]: Finished File System Check on /dev/mapper/main-ldap.
Dec 13 13:18:38 Toushirou systemd-fsck[787]: /dev/mapper/main-ftp: recovering journal
Dec 13 13:18:38 Toushirou systemd-fsck[787]: /dev/mapper/main-ftp: clean, 1042/1966080 files, 4094072/7864320 blocks
Dec 13 13:18:38 Toushirou systemd[1]: Finished File System Check on /dev/mapper/main-ftp.
Dec 13 13:18:38 Toushirou systemd-fsck[794]: /dev/mapper/main-logs: recovering journal
Dec 13 13:18:38 Toushirou systemd-fsck[794]: /dev/mapper/main-logs: Clearing orphaned inode 524490 (uid=0, gid=4, mode=0100640, size=186)
Dec 13 13:18:38 Toushirou systemd-fsck[794]: /dev/mapper/main-logs: Clearing orphaned inode 525136 (uid=0, gid=4, mode=0100640, size=2261619)
[...]
Dec 13 13:18:38 Toushirou systemd-fsck[794]: /dev/mapper/main-logs: clean, 3025/915712 files, 701679/3661824 blocks
Dec 13 13:18:38 Toushirou systemd[1]: Finished File System Check on /dev/mapper/main-logs.
Dec 13 13:18:38 Toushirou systemd-fsck[797]: /dev/mapper/main-mysql: recovering journal
Dec 13 13:18:38 Toushirou systemd-fsck[797]: /dev/mapper/main-mysql: clean, 1706/305216 files, 302945/1220608 blocks
Dec 13 13:18:38 Toushirou systemd[1]: Finished File System Check on /dev/mapper/main-mysql.
Dec 13 13:18:38 Toushirou systemd-fsck[801]: /dev/mapper/main-projects: recovering journal
Dec 13 13:18:38 Toushirou systemd-fsck[801]: /dev/mapper/main-projects: clean, 15384/977280 files, 2501362/3932160 blocks
Dec 13 13:18:38 Toushirou systemd[1]: Finished File System Check on /dev/mapper/main-projects.
Dec 13 13:18:38 Toushirou systemd-fsck[805]: /dev/mapper/main-stuffcloud: recovering journal
Dec 13 13:18:38 Toushirou systemd-fsck[805]: /dev/mapper/main-stuffcloud: clean, 184647/8519680 files, 22560629/34078720 blocks
Dec 13 13:18:38 Toushirou systemd[1]: Finished File System Check on /dev/mapper/main-stuffcloud.
Dec 13 13:18:38 Toushirou systemd-fsck[810]: /dev/mapper/main-var: recovering journal
Dec 13 13:18:38 Toushirou systemd-fsck[810]: /dev/mapper/main-var: Clearing orphaned inode 136445 (uid=0, gid=0, mode=0100664, size=11567160)
Dec 13 13:18:38 Toushirou systemd-fsck[810]: /dev/mapper/main-var: Clearing orphaned inode 136045 (uid=0, gid=0, mode=0100664, size=9253600)
[...]
Dec 13 13:18:38 Toushirou systemd-fsck[810]: /dev/mapper/main-var: clean, 43941/305216 files, 677459/1220608 blocks
Dec 13 13:18:38 Toushirou systemd[1]: Finished File System Check on /dev/mapper/main-var.
Dec 13 13:18:38 Toushirou systemd-fsck[811]: /dev/mapper/main-tmp: recovering journal
Dec 13 13:18:38 Toushirou systemd-fsck[811]: /dev/mapper/main-tmp: Clearing orphaned inode 20 (uid=0, gid=0, mode=0100666, size=0)
Dec 13 13:18:38 Toushirou systemd-fsck[811]: /dev/mapper/main-tmp: Clearing orphaned inode 50 (uid=128, gid=136, mode=0100600, size=0)
[...]
Dec 13 13:18:38 Toushirou systemd-fsck[811]: /dev/mapper/main-tmp: clean, 3380/121920 files, 20791/487424 blocks
Dec 13 13:18:38 Toushirou systemd[1]: Finished File System Check on /dev/mapper/main-tmp.
Dec 13 13:18:38 Toushirou systemd-fsck[814]: /dev/mapper/main-vcs: recovering journal
Dec 13 13:18:38 Toushirou systemd-fsck[814]: /dev/mapper/main-vcs: clean, 62639/183264 files, 334140/732160 blocks
Dec 13 13:18:38 Toushirou systemd[1]: Finished File System Check on /dev/mapper/main-vcs.
Dec 13 13:18:38 Toushirou systemd-fsck[817]: /dev/mapper/main-vmail: recovering journal
Dec 13 13:18:38 Toushirou systemd-fsck[817]: /dev/mapper/main-vmail: Clearing orphaned inode 1314229 (uid=5111, gid=5111, mode=0100600, size=2543956)
[...]
Dec 13 13:18:38 Toushirou systemd-fsck[817]: /dev/mapper/main-vmail: clean, 38189/1966080 files, 3862291/7864320 blocks
Dec 13 13:18:38 Toushirou systemd[1]: Finished File System Check on /dev/mapper/main-vmail.
Dec 13 13:18:38 Toushirou systemd[1]: Finished File System Check on /dev/mapper/extra-lxd.
Dec 13 13:18:38 Toushirou systemd-fsck[827]: /dev/mapper/extra-home: recovering journal
Dec 13 13:18:38 Toushirou systemd-fsck[827]: /dev/mapper/extra-home: clean, 576437/19660800 files, 60022856/78643200 blocks
Dec 13 13:18:38 Toushirou systemd[1]: Finished File System Check on /dev/mapper/extra-home.
Dec 13 13:18:38 Toushirou systemd-fsck[791]: /dev/md0: 348/64000 files (23.9% non-contiguous), 63264/255936 blocks
Dec 13 13:18:38 Toushirou systemd-fsck[819]: /dev/mapper/main-www: recovering journal
Dec 13 13:18:38 Toushirou systemd-fsck[819]: /dev/mapper/main-www: clean, 417149/9175040 files, 7579187/36700160 blocks
Dec 13 13:18:38 Toushirou systemd[1]: Finished File System Check on /dev/mapper/main-www.
</pre></p>
<p>Thanks to GuiHome and Victor for letting me know that the NextCloud service was unavailable.</p>
<p>Once the server had been restarted, there was an error with the Hivane network link, so some services were unavailable. The Nerim link worked.<br /><pre>
root@Toushirou:~# systemctl --failed
UNIT LOAD ACTIVE SUB DESCRIPTION
● apache2.service loaded failed failed The Apache HTTP Server
● ifup@eth\x2dwan\x2dhivane.service loaded failed failed ifup for eth-wan-hivane
● matrix-appservice-irc.service loaded failed failed Matrix AppService IRC
● networking.service loaded failed failed Raise network interfaces
</pre></p>
<pre>
root@Toushirou:~# ifdown --force eth-wan-hivane
RTNETLINK answers: Cannot assign requested address
RTNETLINK answers: Cannot assign requested address
root@Toushirou:~# ifup --force eth-wan-hivane
Waiting for DAD... Timed out
ifup: failed to bring up eth-wan-hivane
</pre>
<p>I remember this timeout issue also occurred the last time the server was moved from one rack to another. I ran the <code>ifdown</code>/<code>ifup</code> commands several times (until the <code>Timed out</code> message disappeared).</p>
<p>The logs show that the timeout issue also occurred at boot:<br /><pre>
Dec 13 13:18:45 Toushirou sh[1562]: Waiting for DAD... Timed out
Dec 13 13:18:45 Toushirou sh[1496]: ifup: failed to bring up eth-wan-hivane
</pre></p>
<p>Next I restarted <code>apache2.service</code> and <code>matrix-appservice-irc.service</code>, then I updated <code>/lib/systemd/system/lxd.socket</code> in order to fix a typo:<br /><pre>Dec 13 15:48:22 Toushirou systemd[1]: /lib/systemd/system/lxd.socket:8: Unit must be of type service, ignoring: lxd.servcie
</pre><br />After that I ran <code>systemctl daemon-reload</code> and <code>lxc list</code>, then the redmine LXC container restarted.</p>
<p>At this time I tried to create this issue on Redmine (<a class="external" href="https://projects.duckcorp.org/">https://projects.duckcorp.org/</a>), but another issue occurred when I tried to authenticate: the Redmine web interface showed the error <code>"Cannot assign requested address - connect(2) for [2001:67c:1740:9001::c1c8:2ab1]:636"</code>.</p>
<p>The restart of the <code>slapd</code> service (which was listening on IPv6 but not IPv4) fixed this issue.</p>
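The <code>slapd</code> symptom (reachable over one address family but not the other) can be confirmed without restarting anything by attempting a TCP connection per family. A small sketch of such a check (host and port here are just the LDAPS values from the error above; the helper is illustrative):

```python
import socket

def can_connect(host, port, family=socket.AF_UNSPEC, timeout=2.0):
    """Return True if a TCP connection to host:port succeeds for the family.

    Use family=socket.AF_INET for IPv4 only, socket.AF_INET6 for IPv6 only.
    """
    try:
        infos = socket.getaddrinfo(host, port, family, socket.SOCK_STREAM)
    except socket.gaierror:
        return False
    for fam, stype, proto, _, addr in infos:
        try:
            with socket.socket(fam, stype, proto) as s:
                s.settimeout(timeout)
                s.connect(addr)
                return True
        except OSError:
            continue
    return False

# e.g. can_connect("localhost", 636, socket.AF_INET) vs socket.AF_INET6
```

Comparing the two calls would have shown immediately that only the IPv6 listener was up.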
DuckCorp Infrastructure - Bug #726 (Resolved): /etc/stunnel/certs/duckcorp_stunnel_redis_Orfeo.pe...
https://projects.duckcorp.org/issues/726
2021-07-08T22:43:06Z
Pierre-Louis Bonicoli
pierre-louis.bonicoli@ir5.eu
<p>On Orfeo: <code>/etc/stunnel/certs/duckcorp_stunnel_redis_Orfeo.pem</code> certificate is expired.</p>
DuckCorp Infrastructure - Enhancement #719 (Rejected): redmine role depends on unversioned patches
https://projects.duckcorp.org/issues/719
2021-02-12T00:13:40Z
Pierre-Louis Bonicoli
pierre-louis.bonicoli@ir5.eu
<p><a href="https://projects.duckcorp.org/projects/dc-admin/repository/ansible-role-redmine/revisions/master/entry/tasks/plugins.yml#L34" class="external">The plugin patches aren't versioned</a>; they are stored on the filesystem where Redmine is installed.</p>
<p>The patches should be moved into the repository where the Ansible inventory is located.</p>
DuckCorp Infrastructure - Bug #717 (Resolved): toushirou: new body
https://projects.duckcorp.org/issues/717
2021-02-11T09:39:15Z
Pierre-Louis Bonicoli
pierre-louis.bonicoli@ir5.eu
<p>Toushirou has been successfully moved from PA2 to PA3.</p>
The following files have been manually edited:
<ul>
<li><code>/etc/systemd/network/10_eth-wan-nerim.link</code>: update MAC address</li>
<li><code>/etc/systemd/network/10_eth-wan-hivane.link</code>: update MAC address</li>
<li><code>/etc/network/interfaces.d/hivane-link</code>: use <code>post-up</code> instead of <code>up</code> in order to workaround this time out:<br /><img src="https://projects.duckcorp.org/attachments/download/98/IMG_20210210_125116_small.jpg" title="Timed out hivane link" alt="Timed out hivane link" loading="lazy" /></li>
</ul>
<p>If needed, Ansible configuration must be updated accordingly.</p>
DuckCorp Infrastructure - Review #681 (Resolved): Undefined attribute: mda_usergroup
https://projects.duckcorp.org/issues/681
2019-10-09T10:18:46Z
Pierre-Louis Bonicoli
pierre-louis.bonicoli@ir5.eu
<p>Fix the following error:</p>
<pre>
$ ansible-playbook playbooks/tenants/duckcorp/security.yml -u root
TASK [dc-antivirus : ClamAV Setup -- Connection Type] ***********************************************************************************************************************
fatal: [Orfeo]: FAILED! => {"msg": "The task includes an option with an undefined variable. The error was: 'dict object' has no attribute 'mda_usergroup'\n\nThe error appears to be in '/srv/share/src/duckcorp/duckcorp-infra.git/ansible/roles/dc-antivirus/tasks/main.yml': line 21, column 3, but may\nbe elsewhere in the file depending on the exact syntax problem.\n\nThe offending line appears to be:\n\n notify: Reconfigure ClamAV\n- name: ClamAV Setup -- Connection Type\n ^ here\n"}
</pre>
DuckCorp Infrastructure - Enhancement #615 (Rejected): new Toushirou: configuration migration
https://projects.duckcorp.org/issues/615
2018-04-23T14:41:26Z
Pierre-Louis Bonicoli
pierre-louis.bonicoli@ir5.eu
<p>This issue regroups tasks related to Toushirou setup.</p>
DuckCorp Infrastructure - Bug #608 (Resolved): debsecan mail configuration problem (Thorfinn, Jin...
https://projects.duckcorp.org/issues/608
2017-10-30T23:20:13Z
Pierre-Louis Bonicoli
pierre-louis.bonicoli@ir5.eu
<p>After <a class="issue tracker-2 status-3 priority-4 priority-default closed" title="Enhancement: Reboot Hivane hosted VMs in order to apply configuration updates (Resolved)" href="https://projects.duckcorp.org/issues/607">#607</a>, I checked the logs on <code>Thorfinn</code> and <code>Jinta</code> and discovered that the Postfix queue wasn't empty:<br /><pre>
postqueue -p
[...]
68ED2105 85288 Mon Oct 30 01:21:08 daemon@Thorfinn.duckcorp.org
(delivery temporarily suspended: connect to Thorfinn.duckcorp.org[193.200.43.26]:25: Connection refused)
root@Thorfinn.duckcorp.org
[...]
-- 424 Kbytes in 9 Requests.
</pre></p>
<p><code>debsecan</code> is executed from a cron job (<code>/etc/cron.d/debsecan</code>); its output is sent to <code>root</code> due to the <code>MAILTO</code> value in <code>/etc/default/debsecan</code>, but the mails are not delivered:<br /><pre>
Oct 31 00:00:27 Thorfinn postfix/smtp[2699]: connect to Thorfinn.duckcorp.org[193.200.43.26]:25: Connection refused
Oct 31 00:00:27 Thorfinn postfix/smtp[2699]: connect to Thorfinn.duckcorp.org[2001:67c:1740:9005::26]:25: Connection refused
Oct 31 00:00:27 Thorfinn postfix/smtp[2698]: connect to Thorfinn.duckcorp.org[193.200.43.26]:25: Connection refused
Oct 31 00:00:27 Thorfinn postfix/smtp[2698]: connect to Thorfinn.duckcorp.org[2001:67c:1740:9005::26]:25: Connection refused
</pre></p>
<p>The same configuration problem exists on Jinta.</p>
<p>On Toushirou it seems there is another problem:</p>
<pre>
Oct 30 02:59:07 toushirou postfix/qmgr[2940]: 3yQHhq1LrRz15SL: from=<daemon@toushirou.duckcorp.org>, size=10467, nrcpt=1 (queue active)
Oct 30 02:59:07 toushirou postfix/smtp[11820]: 3yQHhq1LrRz15SL: to=<root@toushirou.duckcorp.org>, orig_to=<root>, relay=none, delay=0.18, delays=0.16/0.03/0/0, dsn=5.4.6, status=bounced (mail for toushirou.duckcorp.org loops back to myself)
Oct 30 02:59:07 toushirou postfix/cleanup[11818]: 3yQHhq2SKMz15SM: message-id=<3yQHhq2SKMz15SM@toushirou.duckcorp.org>
Oct 30 02:59:07 toushirou postfix/qmgr[2940]: 3yQHhq2SKMz15SM: from=<>, size=12471, nrcpt=1 (queue active)
Oct 30 02:59:07 toushirou postfix/bounce[11821]: 3yQHhq1LrRz15SL: sender non-delivery notification: 3yQHhq2SKMz15SM
Oct 30 02:59:07 toushirou postfix/qmgr[2940]: 3yQHhq1LrRz15SL: removed
Oct 30 02:59:07 toushirou postfix/smtp[11820]: 3yQHhq2SKMz15SM: to=<daemon@toushirou.duckcorp.org>, relay=none, delay=0.1, delays=0.09/0/0/0, dsn=5.4.6, status=bounced (mail for toushirou.duckcorp.org loops back to myself)
</pre>
DuckCorp Infrastructure - Bug #578 (Resolved): Nicecity: local lxc-net service overrides the one ...
https://projects.duckcorp.org/issues/578
2017-07-24T02:59:43Z
Pierre-Louis Bonicoli
pierre-louis.bonicoli@ir5.eu
<p><code>/etc/systemd/system/lxc-net.service</code> overrides <code>/lib/systemd/system/lxc-net.service</code>, which belongs to the LXC package.</p>
<p>It seems both files provide the same features (<a href="https://wiki.debian.org/LXC/SimpleBridge#Using_lxc-net" class="external">see</a>); <code>/etc/systemd/system/lxc-net.service</code> should be removed and <code>/etc/default/lxc-net</code> should be added.</p>
DuckCorp Infrastructure - Bug #513 (Resolved): Mailman: DMARC checks are enabled and could fail
https://projects.duckcorp.org/issues/513
2017-02-28T13:08:45Z
Pierre-Louis Bonicoli
pierre-louis.bonicoli@ir5.eu
<p>DMARC checks are enabled by <code>Mailman</code>.</p>
<p>When a DMARC policy is defined in the sender domain, mails can be rejected:</p>
<p><code>/var/log/mailman/vette</code><br /><pre>
Feb 28 10:36:38 2017 (7194) DMARC lookup for no-reply@microsoft.com (_dmarc.microsoft.com) found p=reject in _dmarc.microsoft.com. = v=DMARC1; p=reject; pct=100; rua=mailto:d@rua.agari.com; ruf=mailto:d@ruf.agari.com; fo=1
Feb 28 10:36:38 2017 (7194) Message discarded, msgid: <40d1e0c8-6fe4-4fd6-acfc-5e359d1960b2@BN1AFFO11OLC003.protection.gbl>
</pre></p>
<pre>
Received-SPF: None (protection.outlook.com: microsoft.com does not designate
permitted sender hosts)
Authentication-Results: spf=none (sender IP is )
smtp.mailfrom=no-reply@microsoft.com;
</pre>
<p><strong>What puzzles me is what/who added the <code>Received-SPF</code> header to the rejected mail?</strong></p>
References:
<ul>
<li><a class="external" href="https://tools.ietf.org/html/rfc7208#section-9">https://tools.ietf.org/html/rfc7208#section-9</a><br /><pre>
The Received-SPF header field is a trace field (see [RFC5322],
Section 3.6.7) and SHOULD be prepended to the existing header, above
the Received: field that is generated by the SMTP receiver.
</pre></li>
</ul>
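Mailman's DMARC check amounts to fetching the <code>_dmarc.&lt;domain&gt;</code> TXT record and reading its <code>p=</code> tag, as the vette log above shows for <code>_dmarc.microsoft.com</code>. A minimal sketch of the tag parsing (DNS lookup omitted; the helper name is illustrative):

```python
def parse_dmarc(record):
    """Parse a DMARC TXT record like 'v=DMARC1; p=reject; pct=100' into a dict."""
    tags = {}
    for part in record.split(";"):
        part = part.strip()
        if not part or "=" not in part:
            continue
        key, _, value = part.partition("=")
        tags[key.strip()] = value.strip()
    return tags
```

Applied to the record from the log, <code>p</code> comes back as <code>reject</code>, which is why the message was discarded.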
DuckCorp Infrastructure - Bug #504 (Rejected): Backups are failing due to an expired certificate
https://projects.duckcorp.org/issues/504
2017-02-01T10:02:10Z
Pierre-Louis Bonicoli
pierre-louis.bonicoli@ir5.eu
<pre>
[root@Korutopi ~]# openssl x509 -in /etc/bacula/certs/duckcorp-backup_bacula_korutopi.crt -text | grep After
Not After : Jul 29 15:14:44 2016 GMT
</pre>
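The "Not After" string printed by <code>openssl x509 -text</code> can be checked programmatically; Python's <code>ssl.cert_time_to_seconds</code> parses exactly this <code>Jul 29 15:14:44 2016 GMT</code> format. A small sketch that could back a monitoring check (the helper name is illustrative):

```python
import ssl
import time

def is_expired(not_after, now=None):
    """True if an OpenSSL 'Not After' date like 'Jul 29 15:14:44 2016 GMT' has passed."""
    expiry = ssl.cert_time_to_seconds(not_after)
    return (time.time() if now is None else now) > expiry
```

Run periodically against the Bacula certificates, this would have flagged the expiry months before backups started failing.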
<pre>
01-Feb 10:00 Korutopi-dir JobId 18455: sql_get.c:391 No volumes found for JobId=18452
01-Feb 10:00 Korutopi-dir JobId 18455: No prior or suitable Full backup found in catalog. Doing FULL backup.
01-Feb 10:00 Korutopi-dir JobId 18455: Start Backup JobId 18455, Job=Thorfinn-general-data.2017-02-01_10.00.00_24
01-Feb 10:00 Korutopi-dir JobId 18455: Error: tls.c:92 Error with certificate at depth: 0, issuer = /C=DL/ST=DuckLand/L=DuckCity/O=DuckCorp/OU=DuckCorp Backup Department/CN=DuckCorp Backup CA/emailAddress=admin@duckcorp.org, subject = /C=DL/ST=DuckLand/L=DuckCity/O=DuckCorp/OU=DuckCorp Backup Department/CN=korutopi.duckcorp.org/emailAddress=admin@duckcorp.org, ERR=10:certificate has expired
01-Feb 10:00 Korutopi-dir JobId 18455: Error: openssl.c:86 Connect failure: ERR=error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
01-Feb 10:00 Korutopi-dir JobId 18455: Fatal error: TLS negotiation failed with SD at "korutopi.duckcorp.org:30003"
01-Feb 10:00 Korutopi-dir JobId 18455: Error: Bacula Korutopi-dir 5.2.6 (21Feb12):
Build OS: x86_64-pc-linux-gnu debian jessie/sid
JobId: 18455
Job: Thorfinn-general-data.2017-02-01_10.00.00_24
Backup Level: Full (upgraded from Incremental)
Client: "Thorfinn-fd" 5.2.6 (21Feb12) x86_64-pc-linux-gnu,debian,jessie/sid
FileSet: "GeneralData Set" 2012-06-24 14:00:00
Pool: "GeneralData-Full" (From Job FullPool override)
Catalog: "DcCatalog" (From Client resource)
Storage: "File" (From Pool resource)
Scheduled time: 01-Feb-2017 10:00:00
Start time: 01-Feb-2017 10:00:24
End time: 01-Feb-2017 10:00:35
Elapsed time: 11 secs
Priority: 50
FD Files Written: 0
SD Files Written: 0
FD Bytes Written: 0 (0 B)
SD Bytes Written: 0 (0 B)
Rate: 0.0 KB/s
Software Compression: None
VSS: no
Encryption: no
Accurate: yes
Volume name(s):
Volume Session Id: 0
Volume Session Time: 0
Last Volume Bytes: 0 (0 B)
Non-fatal FD errors: 2
SD Errors: 0
FD termination status:
SD termination status:
Termination: *** Backup Error ***
01-Feb 10:00 Korutopi-dir JobId 18455: Rescheduled Job Thorfinn-general-data.2017-02-01_10.00.00_24 at 01-Feb-2017 10:00 to re-run in 3600 seconds (01-Feb-2017 11:00).
01-Feb 10:00 Korutopi-dir JobId 18455: Job Thorfinn-general-data.2017-02-01_10.00.00_24 waiting 3600 seconds for scheduled start time.
01-Feb 11:00 Korutopi-dir JobId 18455: Start Backup JobId 18455, Job=Thorfinn-general-data.2017-02-01_10.00.00_24
01-Feb 11:00 Korutopi-dir JobId 18455: Error: tls.c:92 Error with certificate at depth: 0, issuer = /C=DL/ST=DuckLand/L=DuckCity/O=DuckCorp/OU=DuckCorp Backup Department/CN=DuckCorp Backup CA/emailAddress=admin@duckcorp.org, subject = /C=DL/ST=DuckLand/L=DuckCity/O=DuckCorp/OU=DuckCorp Backup Department/CN=korutopi.duckcorp.org/emailAddress=admin@duckcorp.org, ERR=10:certificate has expired
01-Feb 11:00 Korutopi-dir JobId 18455: Error: openssl.c:86 Connect failure: ERR=error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
01-Feb 11:00 Korutopi-dir JobId 18455: Fatal error: TLS negotiation failed with SD at "korutopi.duckcorp.org:30003"
01-Feb 11:00 Korutopi-dir JobId 18455: Error: Bacula Korutopi-dir 5.2.6 (21Feb12):
Build OS: x86_64-pc-linux-gnu debian jessie/sid
JobId: 18455
Job: Thorfinn-general-data.2017-02-01_10.00.00_24
Backup Level: Full (upgraded from Incremental)
Client: "Thorfinn-fd" 5.2.6 (21Feb12) x86_64-pc-linux-gnu,debian,jessie/sid
FileSet: "GeneralData Set" 2012-06-24 14:00:00
Pool: "GeneralData-Full" (From Job FullPool override)
Catalog: "DcCatalog" (From Client resource)
Storage: "File" (From Pool resource)
Scheduled time: 01-Feb-2017 10:00:00
Start time: 01-Feb-2017 11:00:40
End time: 01-Feb-2017 11:00:50
Elapsed time: 10 secs
Priority: 50
FD Files Written: 0
SD Files Written: 0
FD Bytes Written: 0 (0 B)
SD Bytes Written: 0 (0 B)
Rate: 0.0 KB/s
Software Compression: None
VSS: no
Encryption: no
Accurate: yes
Volume name(s):
Volume Session Id: 0
Volume Session Time: 0
Last Volume Bytes: 0 (0 B)
Non-fatal FD errors: 2
SD Errors: 0
FD termination status:
SD termination status:
Termination: *** Backup Error ***
01-Feb 11:00 Korutopi-dir JobId 18455: Rescheduled Job Thorfinn-general-data.2017-02-01_10.00.00_24 at 01-Feb-2017 11:00 to re-run in 3600 seconds (01-Feb-2017 12:00).
01-Feb 11:00 Korutopi-dir JobId 18455: Job Thorfinn-general-data.2017-02-01_10.00.00_24 waiting 3600 seconds for scheduled start time.
01-Feb 12:01 Korutopi-dir JobId 18455: Start Backup JobId 18455, Job=Thorfinn-general-data.2017-02-01_10.00.00_24
01-Feb 12:01 Korutopi-dir JobId 18455: Error: tls.c:92 Error with certificate at depth: 0, issuer = /C=DL/ST=DuckLand/L=DuckCity/O=DuckCorp/OU=DuckCorp Backup Department/CN=DuckCorp Backup CA/emailAddress=admin@duckcorp.org, subject = /C=DL/ST=DuckLand/L=DuckCity/O=DuckCorp/OU=DuckCorp Backup Department/CN=korutopi.duckcorp.org/emailAddress=admin@duckcorp.org, ERR=10:certificate has expired
01-Feb 12:01 Korutopi-dir JobId 18455: Error: openssl.c:86 Connect failure: ERR=error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
01-Feb 12:01 Korutopi-dir JobId 18455: Fatal error: TLS negotiation failed with SD at "korutopi.duckcorp.org:30003"
01-Feb 12:01 Korutopi-dir JobId 18455: Error: Bacula Korutopi-dir 5.2.6 (21Feb12):
Build OS: x86_64-pc-linux-gnu debian jessie/sid
JobId: 18455
Job: Thorfinn-general-data.2017-02-01_10.00.00_24
Backup Level: Full (upgraded from Incremental)
Client: "Thorfinn-fd" 5.2.6 (21Feb12) x86_64-pc-linux-gnu,debian,jessie/sid
FileSet: "GeneralData Set" 2012-06-24 14:00:00
Pool: "GeneralData-Full" (From Job FullPool override)
Catalog: "DcCatalog" (From Client resource)
Storage: "File" (From Pool resource)
Scheduled time: 01-Feb-2017 10:00:00
Start time: 01-Feb-2017 12:01:07
End time: 01-Feb-2017 12:01:19
Elapsed time: 12 secs
Priority: 50
FD Files Written: 0
SD Files Written: 0
FD Bytes Written: 0 (0 B)
SD Bytes Written: 0 (0 B)
Rate: 0.0 KB/s
Software Compression: None
VSS: no
Encryption: no
Accurate: yes
Volume name(s):
Volume Session Id: 0
Volume Session Time: 0
Last Volume Bytes: 0 (0 B)
Non-fatal FD errors: 2
SD Errors: 0
FD termination status:
SD termination status:
Termination: *** Backup Error ***
01-Feb 12:01 Korutopi-dir JobId 18455: Rescheduled Job Thorfinn-general-data.2017-02-01_10.00.00_24 at 01-Feb-2017 12:01 to re-run in 3600 seconds (01-Feb-2017 13:01).
01-Feb 12:01 Korutopi-dir JobId 18455: Job Thorfinn-general-data.2017-02-01_10.00.00_24 waiting 3600 seconds for scheduled start time.
01-Feb 13:01 Korutopi-dir JobId 18455: Start Backup JobId 18455, Job=Thorfinn-general-data.2017-02-01_10.00.00_24
01-Feb 13:01 Korutopi-dir JobId 18455: Error: tls.c:92 Error with certificate at depth: 0, issuer = /C=DL/ST=DuckLand/L=DuckCity/O=DuckCorp/OU=DuckCorp Backup Department/CN=DuckCorp Backup CA/emailAddress=admin@duckcorp.org, subject = /C=DL/ST=DuckLand/L=DuckCity/O=DuckCorp/OU=DuckCorp Backup Department/CN=korutopi.duckcorp.org/emailAddress=admin@duckcorp.org, ERR=10:certificate has expired
01-Feb 13:01 Korutopi-dir JobId 18455: Error: openssl.c:86 Connect failure: ERR=error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
01-Feb 13:01 Korutopi-dir JobId 18455: Fatal error: TLS negotiation failed with SD at "korutopi.duckcorp.org:30003"
01-Feb 13:01 Korutopi-dir JobId 18455: Error: Bacula Korutopi-dir 5.2.6 (21Feb12):
Build OS: x86_64-pc-linux-gnu debian jessie/sid
JobId: 18455
Job: Thorfinn-general-data.2017-02-01_10.00.00_24
Backup Level: Full (upgraded from Incremental)
Client: "Thorfinn-fd" 5.2.6 (21Feb12) x86_64-pc-linux-gnu,debian,jessie/sid
FileSet: "GeneralData Set" 2012-06-24 14:00:00
Pool: "GeneralData-Full" (From Job FullPool override)
Catalog: "DcCatalog" (From Client resource)
Storage: "File" (From Pool resource)
Scheduled time: 01-Feb-2017 10:00:00
Start time: 01-Feb-2017 13:01:21
End time: 01-Feb-2017 13:01:31
Elapsed time: 10 secs
Priority: 50
FD Files Written: 0
SD Files Written: 0
FD Bytes Written: 0 (0 B)
SD Bytes Written: 0 (0 B)
Rate: 0.0 KB/s
Software Compression: None
VSS: no
Encryption: no
Accurate: yes
Volume name(s):
Volume Session Id: 0
Volume Session Time: 0
Last Volume Bytes: 0 (0 B)
Non-fatal FD errors: 2
SD Errors: 0
FD termination status:
SD termination status:
Termination: *** Backup Error ***
</pre>
DuckCorp Infrastructure - Bug #451 (Rejected): postfix and LDAP errors: ldap:/etc/postfix/ldap_re...
https://projects.duckcorp.org/issues/451
2015-05-21T10:44:27Z
Pierre-Louis Bonicoli
pierre-louis.bonicoli@ir5.eu
<blockquote>
<p>zgrep -A 1 "dict_ldap_lookup: Search error -5: Timed out" /var/log/syslog* |sort -rn</p>
</blockquote>
<pre>
/var/log/syslog.7.gz:May 15 06:32:49 orfeo postfix/cleanup[25851]: warning: ldap:/etc/postfix/ldap_redirs.cf lookup error for "arnau@duckcorp.dl"
/var/log/syslog.7.gz:May 15 06:32:49 orfeo postfix/cleanup[25851]: warning: dict_ldap_lookup: Search error -5: Timed out
/var/log/syslog.6.gz:May 16 06:42:58 orfeo postfix/cleanup[7985]: warning: ldap:/etc/postfix/ldap_redirs.cf lookup error for "duck@duckcorp.dl"
/var/log/syslog.6.gz:May 16 06:42:58 orfeo postfix/cleanup[7985]: warning: dict_ldap_lookup: Search error -5: Timed out
/var/log/syslog.3.gz:May 19 06:34:24 orfeo postfix/trivial-rewrite[25111]: warning: ldap:/etc/postfix/ldap_virtual_domains.cf: table lookup problem
/var/log/syslog.3.gz:May 19 06:34:24 orfeo postfix/trivial-rewrite[25111]: warning: dict_ldap_lookup: Search error -5: Timed out
/var/log/syslog.2.gz:May 20 06:32:51 orfeo postfix/trivial-rewrite[3709]: warning: ldap:/etc/postfix/ldap_virtual_domains.cf: table lookup problem
/var/log/syslog.2.gz:May 20 06:32:51 orfeo postfix/trivial-rewrite[3709]: warning: dict_ldap_lookup: Search error -5: Timed out
/var/log/syslog.1:May 21 06:42:36 orfeo postfix/trivial-rewrite[21147]: warning: ldap:/etc/postfix/ldap_virtual_domains.cf: table lookup problem
/var/log/syslog.1:May 21 06:42:36 orfeo postfix/trivial-rewrite[21147]: warning: dict_ldap_lookup: Search error -5: Timed out
/var/log/syslog.1:May 21 06:32:21 orfeo postfix/cleanup[16995]: warning: ldap:/etc/postfix/ldap_redirs.cf lookup error for "Duck@duckcorp.org"
/var/log/syslog.1:May 21 06:32:21 orfeo postfix/cleanup[16995]: warning: dict_ldap_lookup: Search error -5: Timed out
/var/log/syslog.1:May 21 06:28:55 orfeo postfix/trivial-rewrite[16155]: warning: ldap:/etc/postfix/ldap_virtual_domains.cf: table lookup problem
/var/log/syslog.1:May 21 06:28:55 orfeo postfix/trivial-rewrite[16155]: warning: dict_ldap_lookup: Search error -5: Timed out
</pre>
<p>All errors occur around 06:30 AM. The LDAP server is on the same host; the <code>slapd</code> process has been running since <code>Apr06 12:51</code>.</p>
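The clustering around 06:30 can be checked mechanically by extracting the syslog timestamps from the <code>zgrep</code> output above. A throwaway sketch (the year is assumed, since syslog timestamps omit it):

```python
from datetime import datetime

def syslog_minutes(lines, year=2015):
    """Extract minutes-past-midnight from 'May 15 06:32:49' syslog prefixes.

    Handles the '/var/log/syslog.N.gz:' prefix that zgrep adds.
    """
    out = []
    for line in lines:
        stamp = line.split(":", 1)[-1] if line.startswith("/") else line
        ts = datetime.strptime(
            " ".join(stamp.split()[:3]) + f" {year}", "%b %d %H:%M:%S %Y")
        out.append(ts.hour * 60 + ts.minute)
    return out
```

All extracted times fall in a ~15-minute window, which points at a daily 06:30-ish job (e.g. cron/logrotate load) starving the LDAP lookups.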