DuckCorp Projects: Issues
https://projects.duckcorp.org/
2022-07-10T10:42:55Z
DuckCorp Projects
Redmine
DuckCorp Infrastructure - Bug #776 (Resolved): Users are unable to register to projects.duckcorp.org
https://projects.duckcorp.org/issues/776
2022-07-10T10:42:55Z
Pierre-Louis Bonicoli
pierre-louis.bonicoli@ir5.eu
<p>There is an issue related to the captcha:<br /><pre>
Oops, we failed to validate your reCAPTCHA response. Please try again.
</pre><br />I tried with both Firefox and Chromium.</p>
<p><code>/var/log/redmine/dc/production.log</code> from the <code>redmine</code> LXC container:<br /><pre>
Started POST "/account/register" for 185.238.6.46 at 2022-07-10 12:53:52 +0000
Processing by AccountController#register as HTML
Parameters: {"utf8"=>"✓", "authenticity_token"=>"[REDACTED]", "user"=>{"login"=>"pilou_test", "password"=>"[FILTERED]", "password_confirmation"=>"[FILTERED]", "firstname"=>"pilou", "lastname"=>"pilou_test", "mail"=>"pilou_test@ir5.eu", "language"=>"fr"}, "g-recaptcha-response"=>"[REDACTED]", "commit"=>"Soumettre"}
Current user: anonymous
Rendering plugins/recaptcha/app/views/account/register.html.erb within layouts/base
Rendered plugins/recaptcha/app/views/account/register.html.erb within layouts/base (8.8ms)
Completed 200 OK in 3022ms (Views: 14.7ms | ActiveRecord: 1.4ms)
</pre></p>
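For context, reCAPTCHA validation happens server-side: the plugin is expected to POST the `g-recaptcha-response` token to Google's `siteverify` endpoint and inspect the JSON reply, whose `error-codes` field usually explains failures like this one. A minimal sketch of the response handling (hypothetical helper, not the plugin's actual code):

```python
import json

# Hypothetical helper showing how a reCAPTCHA `siteverify` JSON reply is
# typically interpreted; the Redmine recaptcha plugin's actual code differs.
def interpret_siteverify(body):
    """Return (ok, error_codes) from a siteverify response body."""
    data = json.loads(body)
    return bool(data.get("success")), data.get("error-codes", [])
```

Error codes such as `timeout-or-duplicate` or `invalid-input-secret` in the reply would narrow down whether the token expired in transit or the site key/secret pair is wrong.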
DuckCorp Infrastructure - Bug #775 (Resolved): Ninjabot doesn't handle unreachable network
https://projects.duckcorp.org/issues/775
2022-07-10T09:29:08Z
Pierre-Louis Bonicoli
pierre-louis.bonicoli@ir5.eu
<p>Ninjabot was unable to reconnect after encountering a temporarily unreachable network:<br /><pre>
Jul 07 00:40:11 orthos.duckcorp.org ninjabot[1608725]: <= {} None PING ['irc2.duckcorp.org']
Jul 07 00:41:31 orthos.duckcorp.org ninjabot[1608725]: [126B blob data]
Jul 07 00:42:08 orthos.duckcorp.org ninjabot[1608725]: [132B blob data]
Jul 07 00:46:31 orthos.duckcorp.org ninjabot[1608725]: [129B blob data]
Jul 07 00:58:02 orthos.duckcorp.org ninjabot[1608725]: Traceback (most recent call last):
Jul 07 00:58:02 orthos.duckcorp.org ninjabot[1608725]: File "/opt/ninjabot/venv/bin/ninjabot", line 8, in <module>
Jul 07 00:58:02 orthos.duckcorp.org ninjabot[1608725]: sys.exit(ninjabot.cli())
Jul 07 00:58:02 orthos.duckcorp.org ninjabot[1608725]: File "/opt/ninjabot/venv/lib/python3.9/site-packages/ninjabot/ninjabot.py", line 38, in cli
Jul 07 00:58:02 orthos.duckcorp.org ninjabot[1608725]: client.start()
Jul 07 00:58:02 orthos.duckcorp.org ninjabot[1608725]: File "/opt/ninjabot/venv/lib/python3.9/site-packages/py_irc/irc.py", line 99, in start
Jul 07 00:58:02 orthos.duckcorp.org ninjabot[1608725]: buf = self.socket.recv(4096)
Jul 07 00:58:02 orthos.duckcorp.org ninjabot[1608725]: File "/usr/lib/python3.9/ssl.py", line 1226, in recv
Jul 07 00:58:02 orthos.duckcorp.org ninjabot[1608725]: return self.read(buflen)
Jul 07 00:58:02 orthos.duckcorp.org ninjabot[1608725]: File "/usr/lib/python3.9/ssl.py", line 1101, in read
Jul 07 00:58:02 orthos.duckcorp.org ninjabot[1608725]: return self._sslobj.read(len)
Jul 07 00:58:02 orthos.duckcorp.org ninjabot[1608725]: OSError: [Errno 101] Network is unreachable
Jul 07 04:56:32 orthos.duckcorp.org ninjabot[1608725]: [127B blob data]
Jul 07 04:56:32 orthos.duckcorp.org ninjabot[1608725]: Connection broke up
Jul 07 04:56:32 orthos.duckcorp.org ninjabot[1608725]: Attemting to connect to irc.milkypond.org
Jul 07 04:56:32 orthos.duckcorp.org ninjabot[1608725]: Connected to irc.milkypond.org
</pre><br />Despite the <code>Connected</code> message at <code>04:56:32</code>, the bot wasn't actually connected; a manual restart of the service was required.</p>
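A common fix for this class of failure is to catch `OSError` around the connection attempt and retry with exponential backoff instead of letting the exception kill the process. A minimal sketch (the `connect` callable is a stand-in, not ninjabot's actual API):

```python
import time

# Sketch: retry a connection attempt with exponential backoff instead of
# letting OSError (here, errno 101 "Network is unreachable") propagate and
# kill the process. `connect` is a stand-in callable, not ninjabot's API.
def connect_with_retry(connect, retries=5, base_delay=1.0, max_delay=300.0):
    delay = base_delay
    for attempt in range(retries):
        try:
            return connect()
        except OSError:
            if attempt == retries - 1:
                raise  # give up after the last attempt
            time.sleep(delay)
            delay = min(delay * 2, max_delay)
```

The same wrapping would also be needed around the `self.socket.recv(4096)` read loop in the traceback above, since that is where the error actually surfaced.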
DuckCorp Infrastructure - Bug #769 (Rejected): Toushirou gets stuck randomly at boot
https://projects.duckcorp.org/issues/769
2022-04-15T23:36:48Z
Pierre-Louis Bonicoli
pierre-louis.bonicoli@ir5.eu
<p>Toushirou gets stuck randomly at boot.</p>
<p>Another reboot party needs to be planned in order to assess this issue:</p>
<ul>
<li><a href="https://www.askapache.com/linux/linux-debugging/" class="external">kernel parameters</a>: <code>debug ignore_loglevel log_buf_len=10M print_fatal_signals=1 LOGLEVEL=8 earlyprintk=vga,keep sched_debug console=ttyS0,115200 systemd.log_level=debug</code></li>
<li><a href="https://www.suse.com/support/kb/doc/?id=000019461" class="external">step by step systemd boot process</a></li>
<li><a class="external" href="https://wiki.debian.org/systemd#systemd_hangs_on_startup_or_shutdown">https://wiki.debian.org/systemd#systemd_hangs_on_startup_or_shutdown</a></li>
</ul>
<p>Pictures:<br /><img src="https://projects.duckcorp.org/attachments/download/167/2022-04-13-185627_001.jpeg" loading="lazy" style="width: 50%;" alt="" /><br /><img src="https://projects.duckcorp.org/attachments/download/168/2022-04-13-185651_001.jpeg" loading="lazy" style="width: 50%;" alt="" /></p>
DuckCorp Infrastructure - Bug #766 (Resolved): Orfeo postman[1643199]: /usr/lib/ruby/vendor_ruby/...
https://projects.duckcorp.org/issues/766
2022-03-27T19:32:48Z
Pierre-Louis Bonicoli
pierre-louis.bonicoli@ir5.eu
<pre>
Mar 27 23:29:04 Orfeo postman[1643199]: /usr/lib/ruby/vendor_ruby/rubygems/core_ext/kernel_require.rb:85:in `require': cannot load such file -- active_ldap (LoadError)
Mar 27 23:29:04 Orfeo postman[1643199]: from /usr/lib/ruby/vendor_ruby/rubygems/core_ext/kernel_require.rb:85:in `require'
Mar 27 23:29:04 Orfeo postman[1643199]: from /opt/cyborghood/lib/cyborghood/objects/ldap.rb:24:in `<top (required)>'
Mar 27 23:29:04 Orfeo postman[1643199]: from /usr/lib/ruby/vendor_ruby/rubygems/core_ext/kernel_require.rb:85:in `require'
Mar 27 23:29:04 Orfeo postman[1643199]: from /usr/lib/ruby/vendor_ruby/rubygems/core_ext/kernel_require.rb:85:in `require'
Mar 27 23:29:04 Orfeo postman[1643199]: from /opt/cyborghood/lib/cyborghood/objects.rb:20:in `<top (required)>'
Mar 27 23:29:04 Orfeo postman[1643199]: from /usr/lib/ruby/vendor_ruby/rubygems/core_ext/kernel_require.rb:85:in `require'
Mar 27 23:29:04 Orfeo postman[1643199]: from /usr/lib/ruby/vendor_ruby/rubygems/core_ext/kernel_require.rb:85:in `require'
Mar 27 23:29:04 Orfeo postman[1643199]: from /opt/cyborghood/lib/cyborghood/mail.rb:22:in `<top (required)>'
Mar 27 23:29:04 Orfeo postman[1643199]: from /usr/lib/ruby/vendor_ruby/rubygems/core_ext/kernel_require.rb:85:in `require'
Mar 27 23:29:04 Orfeo postman[1643199]: from /usr/lib/ruby/vendor_ruby/rubygems/core_ext/kernel_require.rb:85:in `require'
Mar 27 23:29:04 Orfeo postman[1643199]: from /opt/cyborghood/bin/postman:30:in `<main>'
Mar 27 23:29:04 Orfeo systemd[1]: cyborghood_postman.service: Main process exited, code=exited, status=1/FAILURE
</pre>
DuckCorp Infrastructure - Bug #746 (Rejected): unexpected restart of Toushirou host
https://projects.duckcorp.org/issues/746
2021-12-13T14:16:57Z
Pierre-Louis Bonicoli
pierre-louis.bonicoli@ir5.eu
<p>Today Toushirou was restarted unexpectedly. It seems that this restart wasn't due to a command.</p>
<p>The server was restarted after <code>Dec 13 10:07:03</code> (UTC+1). I unlocked the encrypted volumes around 13:15 (UTC+1).</p>
<p><code>syslog</code> contains:<br /><pre>
Dec 13 10:06:52 Toushirou postfix/smtpd[1353160]: disconnect from <redacted> ehlo=2 starttls=1 mail=1 rcpt=1 bdat=1 quit=1 commands=7
Dec 13 10:07:03 Toushirou stunnel: LOG5[8632]: Connection closed: 182 byte(s) sent to TLS, 20 byte(s) sent to socket
@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
[...]
@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@^@
Dec 13 13:18:38 Toushirou systemd-udevd[631]: Using default interface naming scheme 'v247'.
Dec 13 13:18:38 Toushirou systemd-udevd[630]: Using default interface naming scheme 'v247'.
Dec 13 13:18:38 Toushirou lvm[578]: 3 logical volume(s) in volume group "extra" monitored
</pre></p>
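A long run of NUL bytes in the middle of syslog, as above, is typical of an abrupt power loss or crash: the filesystem had allocated the blocks but the data was never written. Such runs can be located programmatically; a minimal sketch:

```python
# Sketch: find (start, end) offsets of NUL-byte runs in a log file --
# a run of NULs in the middle of syslog is a common sign the machine
# lost power or crashed while the block was being written.
def nul_runs(data, min_len=16):
    runs, start = [], None
    for i, b in enumerate(data):
        if b == 0:
            if start is None:
                start = i
        else:
            if start is not None and i - start >= min_len:
                runs.append((start, i))
            start = None
    if start is not None and len(data) - start >= min_len:
        runs.append((start, len(data)))
    return runs
```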
<p>The filesystem journals were recovered:<br /><pre>
Dec 13 13:18:38 Toushirou systemd-fsck[791]: /dev/md0 was not cleanly unmounted, check forced.
Dec 13 13:18:38 Toushirou systemd-fsck[790]: /dev/mapper/main-ldap: recovering journal
Dec 13 13:18:38 Toushirou systemd-fsck[790]: /dev/mapper/main-ldap: clean, 14/23616 files, 9468/94208 blocks
Dec 13 13:18:38 Toushirou systemd[1]: Finished File System Check on /dev/mapper/main-ldap.
Dec 13 13:18:38 Toushirou systemd-fsck[787]: /dev/mapper/main-ftp: recovering journal
Dec 13 13:18:38 Toushirou systemd-fsck[787]: /dev/mapper/main-ftp: clean, 1042/1966080 files, 4094072/7864320 blocks
Dec 13 13:18:38 Toushirou systemd[1]: Finished File System Check on /dev/mapper/main-ftp.
Dec 13 13:18:38 Toushirou systemd-fsck[794]: /dev/mapper/main-logs: recovering journal
Dec 13 13:18:38 Toushirou systemd-fsck[794]: /dev/mapper/main-logs: Clearing orphaned inode 524490 (uid=0, gid=4, mode=0100640, size=186)
Dec 13 13:18:38 Toushirou systemd-fsck[794]: /dev/mapper/main-logs: Clearing orphaned inode 525136 (uid=0, gid=4, mode=0100640, size=2261619)
[...]
Dec 13 13:18:38 Toushirou systemd-fsck[794]: /dev/mapper/main-logs: clean, 3025/915712 files, 701679/3661824 blocks
Dec 13 13:18:38 Toushirou systemd[1]: Finished File System Check on /dev/mapper/main-logs.
Dec 13 13:18:38 Toushirou systemd-fsck[797]: /dev/mapper/main-mysql: recovering journal
Dec 13 13:18:38 Toushirou systemd-fsck[797]: /dev/mapper/main-mysql: clean, 1706/305216 files, 302945/1220608 blocks
Dec 13 13:18:38 Toushirou systemd[1]: Finished File System Check on /dev/mapper/main-mysql.
Dec 13 13:18:38 Toushirou systemd-fsck[801]: /dev/mapper/main-projects: recovering journal
Dec 13 13:18:38 Toushirou systemd-fsck[801]: /dev/mapper/main-projects: clean, 15384/977280 files, 2501362/3932160 blocks
Dec 13 13:18:38 Toushirou systemd[1]: Finished File System Check on /dev/mapper/main-projects.
Dec 13 13:18:38 Toushirou systemd-fsck[805]: /dev/mapper/main-stuffcloud: recovering journal
Dec 13 13:18:38 Toushirou systemd-fsck[805]: /dev/mapper/main-stuffcloud: clean, 184647/8519680 files, 22560629/34078720 blocks
Dec 13 13:18:38 Toushirou systemd[1]: Finished File System Check on /dev/mapper/main-stuffcloud.
Dec 13 13:18:38 Toushirou systemd-fsck[810]: /dev/mapper/main-var: recovering journal
Dec 13 13:18:38 Toushirou systemd-fsck[810]: /dev/mapper/main-var: Clearing orphaned inode 136445 (uid=0, gid=0, mode=0100664, size=11567160)
Dec 13 13:18:38 Toushirou systemd-fsck[810]: /dev/mapper/main-var: Clearing orphaned inode 136045 (uid=0, gid=0, mode=0100664, size=9253600)
[...]
Dec 13 13:18:38 Toushirou systemd-fsck[810]: /dev/mapper/main-var: clean, 43941/305216 files, 677459/1220608 blocks
Dec 13 13:18:38 Toushirou systemd[1]: Finished File System Check on /dev/mapper/main-var.
Dec 13 13:18:38 Toushirou systemd-fsck[811]: /dev/mapper/main-tmp: recovering journal
Dec 13 13:18:38 Toushirou systemd-fsck[811]: /dev/mapper/main-tmp: Clearing orphaned inode 20 (uid=0, gid=0, mode=0100666, size=0)
Dec 13 13:18:38 Toushirou systemd-fsck[811]: /dev/mapper/main-tmp: Clearing orphaned inode 50 (uid=128, gid=136, mode=0100600, size=0)
[...]
Dec 13 13:18:38 Toushirou systemd-fsck[811]: /dev/mapper/main-tmp: clean, 3380/121920 files, 20791/487424 blocks
Dec 13 13:18:38 Toushirou systemd[1]: Finished File System Check on /dev/mapper/main-tmp.
Dec 13 13:18:38 Toushirou systemd-fsck[814]: /dev/mapper/main-vcs: recovering journal
Dec 13 13:18:38 Toushirou systemd-fsck[814]: /dev/mapper/main-vcs: clean, 62639/183264 files, 334140/732160 blocks
Dec 13 13:18:38 Toushirou systemd[1]: Finished File System Check on /dev/mapper/main-vcs.
Dec 13 13:18:38 Toushirou systemd-fsck[817]: /dev/mapper/main-vmail: recovering journal
Dec 13 13:18:38 Toushirou systemd-fsck[817]: /dev/mapper/main-vmail: Clearing orphaned inode 1314229 (uid=5111, gid=5111, mode=0100600, size=2543956)
[...]
Dec 13 13:18:38 Toushirou systemd-fsck[817]: /dev/mapper/main-vmail: clean, 38189/1966080 files, 3862291/7864320 blocks
Dec 13 13:18:38 Toushirou systemd[1]: Finished File System Check on /dev/mapper/main-vmail.
Dec 13 13:18:38 Toushirou systemd[1]: Finished File System Check on /dev/mapper/extra-lxd.
Dec 13 13:18:38 Toushirou systemd-fsck[827]: /dev/mapper/extra-home: recovering journal
Dec 13 13:18:38 Toushirou systemd-fsck[827]: /dev/mapper/extra-home: clean, 576437/19660800 files, 60022856/78643200 blocks
Dec 13 13:18:38 Toushirou systemd[1]: Finished File System Check on /dev/mapper/extra-home.
Dec 13 13:18:38 Toushirou systemd-fsck[791]: /dev/md0: 348/64000 files (23.9% non-contiguous), 63264/255936 blocks
Dec 13 13:18:38 Toushirou systemd-fsck[819]: /dev/mapper/main-www: recovering journal
Dec 13 13:18:38 Toushirou systemd-fsck[819]: /dev/mapper/main-www: clean, 417149/9175040 files, 7579187/36700160 blocks
Dec 13 13:18:38 Toushirou systemd[1]: Finished File System Check on /dev/mapper/main-www.
</pre></p>
<p>Thanks to GuiHome and Victor for letting me know that the NextCloud service was unavailable.</p>
<p>Once the server was back up, there was an error with the hivane network link; hence some services were unavailable. The nerim link worked.<br /><pre>
root@Toushirou:~# systemctl --failed
UNIT LOAD ACTIVE SUB DESCRIPTION
● apache2.service loaded failed failed The Apache HTTP Server
● ifup@eth\x2dwan\x2dhivane.service loaded failed failed ifup for eth-wan-hivane
● matrix-appservice-irc.service loaded failed failed Matrix AppService IRC
● networking.service loaded failed failed Raise network interfaces
</pre></p>
<pre>
root@Toushirou:~# ifdown --force eth-wan-hivane
RTNETLINK answers: Cannot assign requested address
RTNETLINK answers: Cannot assign requested address
root@Toushirou:~# ifup --force eth-wan-hivane
Waiting for DAD... Timed out
ifup: failed to bring up eth-wan-hivane
</pre>
<p>I remember this timeout occurred the last time the server was moved from one rack to another. I ran the <code>ifdown</code>/<code>ifup</code> commands several times, until the <code>Timed out</code> message disappeared.</p>
<p>The logs show that the timeout occurred at boot:<br /><pre>
Dec 13 13:18:45 Toushirou sh[1562]: Waiting for DAD... Timed out
Dec 13 13:18:45 Toushirou sh[1496]: ifup: failed to bring up eth-wan-hivane
</pre></p>
<p>Next I restarted <code>apache2.service</code> and <code>matrix-appservice-irc.service</code>, then I updated <code>/lib/systemd/system/lxd.socket</code> in order to fix a typo:<br /><pre>Dec 13 15:48:22 Toushirou systemd[1]: /lib/systemd/system/lxd.socket:8: Unit must be of type service, ignoring: lxd.servcie
</pre><br />After that, I ran <code>systemctl daemon-reload</code> and <code>lxc list</code>, then the redmine LXC container restarted.</p>
<p>At this point I tried to create this issue using Redmine (<a href="https://projects.duckcorp.org/" class="external">https://projects.duckcorp.org/</a>), but a problem occurred when I tried to authenticate: the Redmine web interface showed an error: <code>"Cannot assign requested address - connect(2) for [2001:67c:1740:9001::c1c8:2ab1]:636"</code>.</p>
<p>Restarting the <code>slapd</code> service (which was listening on IPv6 but not on IPv4) fixed this issue.</p>
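This kind of per-address-family reachability (e.g. the LDAPS port 636 over IPv4 vs. IPv6) can be checked with a plain TCP connect test. A minimal diagnostic sketch, independent of the actual slapd setup:

```python
import socket

# Diagnostic sketch: check whether a service accepts TCP connections on a
# given host/port. Passing an IPv4 literal vs. an IPv6 literal (e.g. the
# address [2001:67c:1740:9001::c1c8:2ab1] on port 636) exercises each
# address family separately. Not tied to the actual slapd configuration.
def can_connect(host, port, timeout=2.0):
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```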
DuckCorp Infrastructure - Bug #726 (Resolved): /etc/stunnel/certs/duckcorp_stunnel_redis_Orfeo.pe...
https://projects.duckcorp.org/issues/726
2021-07-08T22:43:06Z
Pierre-Louis Bonicoli
pierre-louis.bonicoli@ir5.eu
<p>On Orfeo: <code>/etc/stunnel/certs/duckcorp_stunnel_redis_Orfeo.pem</code> certificate is expired.</p>
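Expiry like this can be caught ahead of time by parsing the certificate's `notAfter` date, as printed by `openssl x509 -enddate` or `-text`. A minimal sketch of the comparison, assuming the usual OpenSSL date format; the monitoring glue around it is left out:

```python
from datetime import datetime, timezone

# Sketch: decide whether a certificate's notAfter date has passed, using
# the format OpenSSL prints, e.g. "Jul 29 15:14:44 2016 GMT".
def cert_expired(not_after, now=None):
    expiry = datetime.strptime(not_after, "%b %d %H:%M:%S %Y %Z")
    expiry = expiry.replace(tzinfo=timezone.utc)  # OpenSSL prints GMT
    if now is None:
        now = datetime.now(timezone.utc)
    return now >= expiry
```

Run from cron with some lead time (e.g. treat anything expiring within 30 days as failed), this turns a hard outage into a warning.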
DuckCorp Infrastructure - Enhancement #719 (Rejected): redmine role depends on unversioned patches
https://projects.duckcorp.org/issues/719
2021-02-12T00:13:40Z
Pierre-Louis Bonicoli
pierre-louis.bonicoli@ir5.eu
<p><a href="https://projects.duckcorp.org/projects/dc-admin/repository/ansible-role-redmine/revisions/master/entry/tasks/plugins.yml#L34" class="external">The plugin patches aren't versioned</a>; they are stored on the filesystem where Redmine is installed.</p>
<p>The patches should be moved into the repository where the Ansible inventory is located.</p>
DuckCorp Infrastructure - Review #705 (Rejected): ansible-role-httpd_php_fpm: create Unix group u...
https://projects.duckcorp.org/issues/705
2020-07-08T19:49:29Z
Pierre-Louis Bonicoli
pierre-louis.bonicoli@ir5.eu
<p>Repository/branch: <a href="https://vcs-git-viewer.duckcorp.org/?p=duckcorp/ansible-role-httpd_php_fpm" class="external"><code>ansible-role-httpd_php_fpm/create_unix_group_for_pool_workers</code></a></p>
<p>Create Unix group used for pool workers.</p>
<p>Fix this error:</p>
<pre>
TASK [zabbix : Generate Zabbix UI configuration]
task path: duckcorp-infra/ansible/roles/zabbix/tasks/webui.yml:30
fatal: [Orthos]: FAILED! => {
"changed": false,
"owner": "root",
"group": "root",
"mode": "0644",
"msg": "chgrp failed: failed to look up group php_sup.duckcorp.org",
"path": "/etc/zabbix/zabbix.conf.php",
"state": "file",
}
</pre>
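One way to fix this on the role side is to create the group before any task that assigns files to it. The sketch below uses Ansible's standard `group` and `template` modules, with illustrative names rather than the role's actual variables:

```yaml
# Sketch: ensure the pool-worker Unix group exists before any task chgrps
# files to it (names are illustrative, not the role's actual variables).
- name: Create Unix group for PHP-FPM pool workers
  group:
    name: "php_sup.duckcorp.org"
    system: true

- name: Generate Zabbix UI configuration
  template:
    src: zabbix.conf.php.j2
    dest: /etc/zabbix/zabbix.conf.php
    owner: root
    group: "php_sup.duckcorp.org"
    mode: "0644"
```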
DuckCorp Infrastructure - Enhancement #615 (Rejected): new Toushirou: configuration migration
https://projects.duckcorp.org/issues/615
2018-04-23T14:41:26Z
Pierre-Louis Bonicoli
pierre-louis.bonicoli@ir5.eu
<p>This issue regroups tasks related to Toushirou setup.</p>
DuckCorp Infrastructure - Bug #605 (Rejected): No mail since 2017-10-15 07:00:02
https://projects.duckcorp.org/issues/605
2017-10-16T10:53:44Z
Pierre-Louis Bonicoli
pierre-louis.bonicoli@ir5.eu
<p>On orfeo, at Oct 15 07:00:00 (UTC+2), the policyd-weight daemon was unable to restart; from then on, all incoming mail was rejected.</p>
<pre>
# systemctl status policyd-weight.service
● policyd-weight.service - LSB: Start and stop the policyd-weight daemon
Loaded: loaded (/etc/init.d/policyd-weight; generated; vendor preset: enabled)
Active: failed (Result: exit-code) since Sun 2017-10-15 07:00:02 CEST; 1 day 5h ago
Docs: man:systemd-sysv-generator(8)
Process: 28244 ExecStop=/etc/init.d/policyd-weight stop (code=exited, status=0/SUCCESS)
Process: 28291 ExecStart=/etc/init.d/policyd-weight start (code=exited, status=1/FAILURE)
Tasks: 0 (limit: 4915)
Memory: 372.0K
CPU: 389ms
CGroup: /system.slice/policyd-weight.service
</pre>
<pre>
# grep "policyd-weight" /var/log/syslog.1
Oct 15 07:00:00 orfeo systemd[1]: Stopping LSB: Start and stop the policyd-weight daemon...
Oct 15 07:00:02 orfeo policyd-weight[28244]: Stopping policyd-weight (incl. cache): policyd-weight.
Oct 15 07:00:02 orfeo systemd[1]: Stopped LSB: Start and stop the policyd-weight daemon.
Oct 15 07:00:02 orfeo systemd[1]: Starting LSB: Start and stop the policyd-weight daemon...
Oct 15 07:00:03 orfeo policyd-weight[28291]: Starting policyd-weight: policyd-weightmaster: bind 12525: IO::Socket::INET: Address already in use Address already in use at /usr/sbin/policyd-weight line 1052.
Oct 15 07:00:03 orfeo postfix/policyd-weight[28294]: warning: err: init: master: bind 12525: IO::Socket::INET: Address already in use Address already in use at /usr/sbin/policyd-weight line 1052.
Oct 15 07:00:03 orfeo policyd-weight[28291]: failed!
Oct 15 07:00:04 orfeo systemd[1]: policyd-weight.service: Control process exited, code=exited status=1
Oct 15 07:00:04 orfeo systemd[1]: Failed to start LSB: Start and stop the policyd-weight daemon.
Oct 15 07:00:04 orfeo systemd[1]: policyd-weight.service: Unit entered failed state.
Oct 15 07:00:04 orfeo systemd[1]: policyd-weight.service: Failed with result 'exit-code'.
Oct 15 07:00:05 orfeo postfix/policyd-weight[16253]: cache killed
</pre>
<pre>
# /var/log/syslog.1 extract
Oct 15 07:00:39 orfeo postfix/smtpd[28403]: warning: connect to 127.0.0.1:12525: Connection refused
Oct 15 07:00:39 orfeo postfix/smtpd[28403]: warning: problem talking to server 127.0.0.1:12525: Connection refused
Oct 15 07:00:40 orfeo postfix/smtpd[28403]: warning: connect to 127.0.0.1:12525: Connection refused
Oct 15 07:00:40 orfeo postfix/smtpd[28403]: warning: problem talking to server 127.0.0.1:12525: Connection refused
Oct 15 07:00:40 orfeo postfix/smtpd[28403]: NOQUEUE: reject: RCPT from XXX: 451 4.3.5 <XXXX@milkypond.org>: Recipient address rejected: Server configuration problem; from=<XXX@outlook.com> to=<XXX@milkypond.org> proto=ESMTP helo=<XXX>
</pre>
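The underlying failure is a classic restart race: the new daemon tries to bind port 12525 while the old one (or its cache process) still holds it. A minimal sketch of checking whether the port is still occupied before starting:

```python
import socket

# Sketch: check whether a local TCP port (e.g. policyd-weight's 12525) is
# still bound -- the "Address already in use" failure mode above.
# SO_REUSEADDR avoids false positives from sockets lingering in TIME_WAIT.
def port_in_use(port, host="127.0.0.1"):
    with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as s:
        s.setsockopt(socket.SOL_SOCKET, socket.SO_REUSEADDR, 1)
        try:
            s.bind((host, port))
            return False
        except OSError:
            return True
```

An init script could loop on such a check (with a short timeout) between stop and start instead of assuming the old listener is gone immediately.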
<p>Thanks to rtp for pointing that out.</p>
DuckCorp Infrastructure - Review #562 (Rejected): Fix "Invalid SCRIPTWHITELIST configuration opti...
https://projects.duckcorp.org/issues/562
2017-06-19T12:27:16Z
Pierre-Louis Bonicoli
pierre-louis.bonicoli@ir5.eu
<p>Could you review the <code>rkhunter_lwp_request_isnt_a_dependency</code> branch?</p>
<p><code>lwp-request</code> belongs to <code>libwww-perl</code> but <code>libwww-perl</code> isn't a dependency of <code>rkhunter</code>.</p>
DuckCorp Infrastructure - Bug #504 (Rejected): Backups are failing due to an expired certificate
https://projects.duckcorp.org/issues/504
2017-02-01T10:02:10Z
Pierre-Louis Bonicoli
pierre-louis.bonicoli@ir5.eu
<pre>
[root@Korutopi ~]# openssl x509 -in /etc/bacula/certs/duckcorp-backup_bacula_korutopi.crt -text | grep After
Not After : Jul 29 15:14:44 2016 GMT
</pre>
<pre>
01-Feb 10:00 Korutopi-dir JobId 18455: sql_get.c:391 No volumes found for JobId=18452
01-Feb 10:00 Korutopi-dir JobId 18455: No prior or suitable Full backup found in catalog. Doing FULL backup.
01-Feb 10:00 Korutopi-dir JobId 18455: Start Backup JobId 18455, Job=Thorfinn-general-data.2017-02-01_10.00.00_24
01-Feb 10:00 Korutopi-dir JobId 18455: Error: tls.c:92 Error with certificate at depth: 0, issuer = /C=DL/ST=DuckLand/L=DuckCity/O=DuckCorp/OU=DuckCorp Backup Department/CN=DuckCorp Backup CA/emailAddress=admin@duckcorp.org, subject = /C=DL/ST=DuckLand/L=DuckCity/O=DuckCorp/OU=DuckCorp Backup Department/CN=korutopi.duckcorp.org/emailAddress=admin@duckcorp.org, ERR=10:certificate has expired
01-Feb 10:00 Korutopi-dir JobId 18455: Error: openssl.c:86 Connect failure: ERR=error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
01-Feb 10:00 Korutopi-dir JobId 18455: Fatal error: TLS negotiation failed with SD at "korutopi.duckcorp.org:30003"
01-Feb 10:00 Korutopi-dir JobId 18455: Error: Bacula Korutopi-dir 5.2.6 (21Feb12):
Build OS: x86_64-pc-linux-gnu debian jessie/sid
JobId: 18455
Job: Thorfinn-general-data.2017-02-01_10.00.00_24
Backup Level: Full (upgraded from Incremental)
Client: "Thorfinn-fd" 5.2.6 (21Feb12) x86_64-pc-linux-gnu,debian,jessie/sid
FileSet: "GeneralData Set" 2012-06-24 14:00:00
Pool: "GeneralData-Full" (From Job FullPool override)
Catalog: "DcCatalog" (From Client resource)
Storage: "File" (From Pool resource)
Scheduled time: 01-Feb-2017 10:00:00
Start time: 01-Feb-2017 10:00:24
End time: 01-Feb-2017 10:00:35
Elapsed time: 11 secs
Priority: 50
FD Files Written: 0
SD Files Written: 0
FD Bytes Written: 0 (0 B)
SD Bytes Written: 0 (0 B)
Rate: 0.0 KB/s
Software Compression: None
VSS: no
Encryption: no
Accurate: yes
Volume name(s):
Volume Session Id: 0
Volume Session Time: 0
Last Volume Bytes: 0 (0 B)
Non-fatal FD errors: 2
SD Errors: 0
FD termination status:
SD termination status:
Termination: *** Backup Error ***
01-Feb 10:00 Korutopi-dir JobId 18455: Rescheduled Job Thorfinn-general-data.2017-02-01_10.00.00_24 at 01-Feb-2017 10:00 to re-run in 3600 seconds (01-Feb-2017 11:00).
01-Feb 10:00 Korutopi-dir JobId 18455: Job Thorfinn-general-data.2017-02-01_10.00.00_24 waiting 3600 seconds for scheduled start time.
01-Feb 11:00 Korutopi-dir JobId 18455: Start Backup JobId 18455, Job=Thorfinn-general-data.2017-02-01_10.00.00_24
01-Feb 11:00 Korutopi-dir JobId 18455: Error: tls.c:92 Error with certificate at depth: 0, issuer = /C=DL/ST=DuckLand/L=DuckCity/O=DuckCorp/OU=DuckCorp Backup Department/CN=DuckCorp Backup CA/emailAddress=admin@duckcorp.org, subject = /C=DL/ST=DuckLand/L=DuckCity/O=DuckCorp/OU=DuckCorp Backup Department/CN=korutopi.duckcorp.org/emailAddress=admin@duckcorp.org, ERR=10:certificate has expired
01-Feb 11:00 Korutopi-dir JobId 18455: Error: openssl.c:86 Connect failure: ERR=error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
01-Feb 11:00 Korutopi-dir JobId 18455: Fatal error: TLS negotiation failed with SD at "korutopi.duckcorp.org:30003"
01-Feb 11:00 Korutopi-dir JobId 18455: Error: Bacula Korutopi-dir 5.2.6 (21Feb12):
Build OS: x86_64-pc-linux-gnu debian jessie/sid
JobId: 18455
Job: Thorfinn-general-data.2017-02-01_10.00.00_24
Backup Level: Full (upgraded from Incremental)
Client: "Thorfinn-fd" 5.2.6 (21Feb12) x86_64-pc-linux-gnu,debian,jessie/sid
FileSet: "GeneralData Set" 2012-06-24 14:00:00
Pool: "GeneralData-Full" (From Job FullPool override)
Catalog: "DcCatalog" (From Client resource)
Storage: "File" (From Pool resource)
Scheduled time: 01-Feb-2017 10:00:00
Start time: 01-Feb-2017 11:00:40
End time: 01-Feb-2017 11:00:50
Elapsed time: 10 secs
Priority: 50
FD Files Written: 0
SD Files Written: 0
FD Bytes Written: 0 (0 B)
SD Bytes Written: 0 (0 B)
Rate: 0.0 KB/s
Software Compression: None
VSS: no
Encryption: no
Accurate: yes
Volume name(s):
Volume Session Id: 0
Volume Session Time: 0
Last Volume Bytes: 0 (0 B)
Non-fatal FD errors: 2
SD Errors: 0
FD termination status:
SD termination status:
Termination: *** Backup Error ***
01-Feb 11:00 Korutopi-dir JobId 18455: Rescheduled Job Thorfinn-general-data.2017-02-01_10.00.00_24 at 01-Feb-2017 11:00 to re-run in 3600 seconds (01-Feb-2017 12:00).
01-Feb 11:00 Korutopi-dir JobId 18455: Job Thorfinn-general-data.2017-02-01_10.00.00_24 waiting 3600 seconds for scheduled start time.
01-Feb 12:01 Korutopi-dir JobId 18455: Start Backup JobId 18455, Job=Thorfinn-general-data.2017-02-01_10.00.00_24
01-Feb 12:01 Korutopi-dir JobId 18455: Error: tls.c:92 Error with certificate at depth: 0, issuer = /C=DL/ST=DuckLand/L=DuckCity/O=DuckCorp/OU=DuckCorp Backup Department/CN=DuckCorp Backup CA/emailAddress=admin@duckcorp.org, subject = /C=DL/ST=DuckLand/L=DuckCity/O=DuckCorp/OU=DuckCorp Backup Department/CN=korutopi.duckcorp.org/emailAddress=admin@duckcorp.org, ERR=10:certificate has expired
01-Feb 12:01 Korutopi-dir JobId 18455: Error: openssl.c:86 Connect failure: ERR=error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
01-Feb 12:01 Korutopi-dir JobId 18455: Fatal error: TLS negotiation failed with SD at "korutopi.duckcorp.org:30003"
01-Feb 12:01 Korutopi-dir JobId 18455: Error: Bacula Korutopi-dir 5.2.6 (21Feb12):
Build OS: x86_64-pc-linux-gnu debian jessie/sid
JobId: 18455
Job: Thorfinn-general-data.2017-02-01_10.00.00_24
Backup Level: Full (upgraded from Incremental)
Client: "Thorfinn-fd" 5.2.6 (21Feb12) x86_64-pc-linux-gnu,debian,jessie/sid
FileSet: "GeneralData Set" 2012-06-24 14:00:00
Pool: "GeneralData-Full" (From Job FullPool override)
Catalog: "DcCatalog" (From Client resource)
Storage: "File" (From Pool resource)
Scheduled time: 01-Feb-2017 10:00:00
Start time: 01-Feb-2017 12:01:07
End time: 01-Feb-2017 12:01:19
Elapsed time: 12 secs
Priority: 50
FD Files Written: 0
SD Files Written: 0
FD Bytes Written: 0 (0 B)
SD Bytes Written: 0 (0 B)
Rate: 0.0 KB/s
Software Compression: None
VSS: no
Encryption: no
Accurate: yes
Volume name(s):
Volume Session Id: 0
Volume Session Time: 0
Last Volume Bytes: 0 (0 B)
Non-fatal FD errors: 2
SD Errors: 0
FD termination status:
SD termination status:
Termination: *** Backup Error ***
01-Feb 12:01 Korutopi-dir JobId 18455: Rescheduled Job Thorfinn-general-data.2017-02-01_10.00.00_24 at 01-Feb-2017 12:01 to re-run in 3600 seconds (01-Feb-2017 13:01).
01-Feb 12:01 Korutopi-dir JobId 18455: Job Thorfinn-general-data.2017-02-01_10.00.00_24 waiting 3600 seconds for scheduled start time.
01-Feb 13:01 Korutopi-dir JobId 18455: Start Backup JobId 18455, Job=Thorfinn-general-data.2017-02-01_10.00.00_24
01-Feb 13:01 Korutopi-dir JobId 18455: Error: tls.c:92 Error with certificate at depth: 0, issuer = /C=DL/ST=DuckLand/L=DuckCity/O=DuckCorp/OU=DuckCorp Backup Department/CN=DuckCorp Backup CA/emailAddress=admin@duckcorp.org, subject = /C=DL/ST=DuckLand/L=DuckCity/O=DuckCorp/OU=DuckCorp Backup Department/CN=korutopi.duckcorp.org/emailAddress=admin@duckcorp.org, ERR=10:certificate has expired
01-Feb 13:01 Korutopi-dir JobId 18455: Error: openssl.c:86 Connect failure: ERR=error:14090086:SSL routines:SSL3_GET_SERVER_CERTIFICATE:certificate verify failed
01-Feb 13:01 Korutopi-dir JobId 18455: Fatal error: TLS negotiation failed with SD at "korutopi.duckcorp.org:30003"
01-Feb 13:01 Korutopi-dir JobId 18455: Error: Bacula Korutopi-dir 5.2.6 (21Feb12):
Build OS: x86_64-pc-linux-gnu debian jessie/sid
JobId: 18455
Job: Thorfinn-general-data.2017-02-01_10.00.00_24
Backup Level: Full (upgraded from Incremental)
Client: "Thorfinn-fd" 5.2.6 (21Feb12) x86_64-pc-linux-gnu,debian,jessie/sid
FileSet: "GeneralData Set" 2012-06-24 14:00:00
Pool: "GeneralData-Full" (From Job FullPool override)
Catalog: "DcCatalog" (From Client resource)
Storage: "File" (From Pool resource)
Scheduled time: 01-Feb-2017 10:00:00
Start time: 01-Feb-2017 13:01:21
End time: 01-Feb-2017 13:01:31
Elapsed time: 10 secs
Priority: 50
FD Files Written: 0
SD Files Written: 0
FD Bytes Written: 0 (0 B)
SD Bytes Written: 0 (0 B)
Rate: 0.0 KB/s
Software Compression: None
VSS: no
Encryption: no
Accurate: yes
Volume name(s):
Volume Session Id: 0
Volume Session Time: 0
Last Volume Bytes: 0 (0 B)
Non-fatal FD errors: 2
SD Errors: 0
FD termination status:
SD termination status:
Termination: *** Backup Error ***
</pre>
DuckCorp Infrastructure - Bug #451 (Rejected): postfix and LDAP errors: ldap:/etc/postfix/ldap_re...
https://projects.duckcorp.org/issues/451
2015-05-21T10:44:27Z
Pierre-Louis Bonicoli
pierre-louis.bonicoli@ir5.eu
<blockquote>
<p><code>zgrep -A 1 "dict_ldap_lookup: Search error -5: Timed out" /var/log/syslog* | sort -rn</code></p>
</blockquote>
<pre>
/var/log/syslog.7.gz:May 15 06:32:49 orfeo postfix/cleanup[25851]: warning: ldap:/etc/postfix/ldap_redirs.cf lookup error for "arnau@duckcorp.dl"
/var/log/syslog.7.gz:May 15 06:32:49 orfeo postfix/cleanup[25851]: warning: dict_ldap_lookup: Search error -5: Timed out
/var/log/syslog.6.gz:May 16 06:42:58 orfeo postfix/cleanup[7985]: warning: ldap:/etc/postfix/ldap_redirs.cf lookup error for "duck@duckcorp.dl"
/var/log/syslog.6.gz:May 16 06:42:58 orfeo postfix/cleanup[7985]: warning: dict_ldap_lookup: Search error -5: Timed out
/var/log/syslog.3.gz:May 19 06:34:24 orfeo postfix/trivial-rewrite[25111]: warning: ldap:/etc/postfix/ldap_virtual_domains.cf: table lookup problem
/var/log/syslog.3.gz:May 19 06:34:24 orfeo postfix/trivial-rewrite[25111]: warning: dict_ldap_lookup: Search error -5: Timed out
/var/log/syslog.2.gz:May 20 06:32:51 orfeo postfix/trivial-rewrite[3709]: warning: ldap:/etc/postfix/ldap_virtual_domains.cf: table lookup problem
/var/log/syslog.2.gz:May 20 06:32:51 orfeo postfix/trivial-rewrite[3709]: warning: dict_ldap_lookup: Search error -5: Timed out
/var/log/syslog.1:May 21 06:42:36 orfeo postfix/trivial-rewrite[21147]: warning: ldap:/etc/postfix/ldap_virtual_domains.cf: table lookup problem
/var/log/syslog.1:May 21 06:42:36 orfeo postfix/trivial-rewrite[21147]: warning: dict_ldap_lookup: Search error -5: Timed out
/var/log/syslog.1:May 21 06:32:21 orfeo postfix/cleanup[16995]: warning: ldap:/etc/postfix/ldap_redirs.cf lookup error for "Duck@duckcorp.org"
/var/log/syslog.1:May 21 06:32:21 orfeo postfix/cleanup[16995]: warning: dict_ldap_lookup: Search error -5: Timed out
/var/log/syslog.1:May 21 06:28:55 orfeo postfix/trivial-rewrite[16155]: warning: ldap:/etc/postfix/ldap_virtual_domains.cf: table lookup problem
/var/log/syslog.1:May 21 06:28:55 orfeo postfix/trivial-rewrite[16155]: warning: dict_ldap_lookup: Search error -5: Timed out
</pre>
<p>All errors occur around 06:30 AM. The LDAP server is on the same host; the <code>slapd</code> process has been running since <code>Apr06 12:51</code>.</p>
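Since all the failures cluster around the same time of day, tallying the warnings by hour can confirm the correlation with a daily job (logrotate, backups, ...). A small sketch over syslog lines like the ones above:

```python
import re
from collections import Counter

# Sketch: tally "Timed out" warnings by hour of day to confirm they
# cluster around a daily job. Expects syslog-style lines as shown above.
def hour_histogram(lines):
    hours = Counter()
    for line in lines:
        m = re.search(r"\b(\d{2}):\d{2}:\d{2}\b", line)
        if m and "Timed out" in line:
            hours[m.group(1)] += 1
    return hours
```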
Bip - Bug #339 (Rejected): Client side ssl not working
https://projects.duckcorp.org/issues/339
2014-06-10T14:02:00Z
Pierre-Louis Bonicoli
pierre-louis.bonicoli@ir5.eu
<p>kick wrote on irc:</p>
<blockquote>
<p>I copied my working config file from my bip 0.8.8-2<br />and I've got SSL handshake problems.<br />I'm using an Ubuntu Trusty for bip 0.8.9-1.<br />I have a bip.pem set, with good owner and permissions.</p>
</blockquote>
<p>Error in client:</p>
<blockquote>
<p>Connection failed. Error: (336151568) error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure</p>
</blockquote>
<p>bip.log contains:</p>
<blockquote>
<p>139638493165216:error:1408A0C1:SSL routines:SSL3_GET_CLIENT_HELLO:no shared cipher:s3_srvr.c:1358:ERROR: Error in SSL handshake.</p>
</blockquote>
<p><strong>bip 0.8.8-2, sslv3</strong><br /><pre>
openssl s_client -ssl3 -connect edited.bip.server:7778
CONNECTED(00000003)
depth=0 C = fr, O = Sexy boys, OU = Bip, CN = Bip
verify error:num=18:self signed certificate
verify return:1
depth=0 C = fr, O = Sexy boys, OU = Bip, CN = Bip
verify return:1
---
Certificate chain
0 s:/C=fr/O=Sexy boys/OU=Bip/CN=Bip
i:/C=fr/O=Sexy boys/OU=Bip/CN=Bip
---
Server certificate
-----BEGIN CERTIFICATE-----
EDITED XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
-----END CERTIFICATE-----
subject=/C=fr/O=Sexy boys/OU=Bip/CN=Bip
issuer=/C=fr/O=Sexy boys/OU=Bip/CN=Bip
---
No client certificate CA names sent
---
SSL handshake has read 2318 bytes and written 364 bytes
---
New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA
Server public key is 4096 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
SSL-Session:
Protocol : SSLv3
Cipher : DHE-RSA-AES256-SHA
Session-ID: EDITED XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Session-ID-ctx:
Master-Key: EDITED XXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXXX
Key-Arg : None
PSK identity: None
PSK identity hint: None
SRP username: None
Start Time: 1402406408
Timeout : 7200 (sec)
Verify return code: 18 (self signed certificate)
</pre></p>
<p><strong>bip 0.8.8-2, tls1</strong><br /><pre>
openssl s_client -tls1 -connect server.bip.edited:7778
CONNECTED(00000003)
depth=0 C = fr, O = Sexy boys, OU = Bip, CN = Bip
verify error:num=18:self signed certificate
verify return:1
depth=0 C = fr, O = Sexy boys, OU = Bip, CN = Bip
verify return:1
---
Certificate chain
0 s:/C=fr/O=Sexy boys/OU=Bip/CN=Bip
i:/C=fr/O=Sexy boys/OU=Bip/CN=Bip
---
Server certificate
-----BEGIN CERTIFICATE-----
Edited XXXXXXXXXXXXXXXXXXXXXXX
-----END CERTIFICATE-----
subject=/C=fr/O=Sexy boys/OU=Bip/CN=Bip
issuer=/C=fr/O=Sexy boys/OU=Bip/CN=Bip
---
No client certificate CA names sent
---
SSL handshake has read 2454 bytes and written 423 bytes
---
New, TLSv1/SSLv3, Cipher is DHE-RSA-AES256-SHA
Server public key is 4096 bit
Secure Renegotiation IS supported
Compression: NONE
Expansion: NONE
SSL-Session:
Protocol : TLSv1
Cipher : DHE-RSA-AES256-SHA
Session-ID: Edited XXXXXXXXXXXXXXXXXXXXXXX
Session-ID-ctx:
Master-Key: Edited XXXXXXXXXXXXXXXXXXXXXXX
Key-Arg : None
PSK identity: None
PSK identity hint: None
SRP username: None
TLS session ticket lifetime hint: 60 (seconds)
TLS session ticket:
0000 - 0d b9 57 57 8b b7 cd bf-70 3c 72 79 d0 f4 6f 81 ..WW....p<ry..o.
0010 - e4 30 64 d1 97 96 62 05-8c ed 45 8e d8 36 d6 52 .0d...b...E..6.R
0020 - 37 65 b5 7d 6d 19 5c 8e-22 ab 31 4c a5 b9 ac 6a 7e.}m.\.".1L...j
Edited XXXXXXXXXXXXXXXXXXXXXXX
0080 - f7 cc ab e5 18 cc 33 28-b0 7a 12 46 3f 21 ba 1b ......3(.z.F?!..
0090 - c0 9b 4c 8b 61 3a 4d d4-78 e8 77 91 80 b9 ab a1 ..L.a:M.x.w.....
Start Time: 1402406391
Timeout : 7200 (sec)
Verify return code: 18 (self signed certificate)
---
</pre></p>
<p><strong>bip 0.8.9-1, sslv3</strong><br /><pre>
openssl s_client -ssl3 -connect edited:7778
CONNECTED(00000003)
140228681320096:error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure:s3_pkt.c:1260:SSL alert number 40
140228681320096:error:1409E0E5:SSL routines:SSL3_WRITE_BYTES:ssl handshake failure:s3_pkt.c:596:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 7 bytes and written 0 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
SSL-Session:
Protocol : SSLv3
Cipher : 0000
Session-ID:
Session-ID-ctx:
Master-Key:
Key-Arg : None
PSK identity: None
PSK identity hint: None
SRP username: None
Start Time: 1402406211
Timeout : 7200 (sec)
Verify return code: 0 (ok)
---
</pre></p>
<p><strong>bip 0.8.9-1, tls1</strong><br /><pre>
openssl s_client -tls1 -connect edited:7778
CONNECTED(00000003)
140587600295584:error:14094410:SSL routines:SSL3_READ_BYTES:sslv3 alert handshake failure:s3_pkt.c:1260:SSL alert number 40
140587600295584:error:1409E0E5:SSL routines:SSL3_WRITE_BYTES:ssl handshake failure:s3_pkt.c:596:
---
no peer certificate available
---
No client certificate CA names sent
---
SSL handshake has read 7 bytes and written 0 bytes
---
New, (NONE), Cipher is (NONE)
Secure Renegotiation IS NOT supported
Compression: NONE
Expansion: NONE
SSL-Session:
Protocol : TLSv1
Cipher : 0000
Session-ID:
Session-ID-ctx:
Master-Key:
Key-Arg : None
PSK identity: None
PSK identity hint: None
SRP username: None
Start Time: 1402406299
Timeout : 7200 (sec)
Verify return code: 0 (ok)
</pre></p>
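<p>The transcripts above show that 0.8.9-1 rejects both SSLv3 and TLSv1 handshakes with alert 40, matching the server-side <code>no shared cipher</code> error. When narrowing down such a failure, it can help to probe which ciphers the listener actually accepts. A hedged sketch (the host and port are placeholders for the redacted bip server; <code>eNULL</code> is included only so the full cipher list is enumerated):</p>
<pre>
# Try each OpenSSL cipher against the bip listener and report the ones
# for which the handshake succeeds. s_client exits non-zero on failure.
host=edited.bip.server port=7778
openssl ciphers 'ALL:eNULL' | tr ':' '\n' | while read -r c; do
  if echo | openssl s_client -cipher "$c" -connect "$host:$port" >/dev/null 2>&1; then
    echo "accepted: $c"
  fi
done
</pre>
<p>An empty result for every cipher would suggest the 0.8.9 build cannot use the key/certificate in <code>bip.pem</code> at all (e.g. a load failure at startup) rather than a genuine cipher mismatch.</p>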
MyCyma - Cosmetic #3 (Rejected): Uppercase letters with an acute accent are not displayed correctly
https://projects.duckcorp.org/issues/3
2008-11-23T21:32:53Z
Pierre-Louis Bonicoli
pierre-louis.bonicoli@ir5.eu
<p>In the Admin UI; see the attached file.</p>