DuckCorp Projects: Issues [https://projects.duckcorp.org/, 2023-07-09T13:51:58Z]
DuckCorp Infrastructure - Bug #783 (In Progress): Move Services out of Orfeo [https://projects.duckcorp.org/issues/783, 2023-07-09T13:51:58Z, Marc Dequènes (duck@duckcorp.org)]
Orfeo's RAID 1 has one disk down, so let's move some services out of it for now:
<ul>
<li>✅ PostgreSQL database -> Toushirou</li>
<li>✅ webmail -> Toushirou</li>
<li>✅ mailing-lists -> Toushirou</li>
<li>✅ XMPP -> Jinta</li>
<li>🔳 IRC services</li>
<li>🔳 (maybe, or later if things get bad) NS1 &amp; DDNS -> Toushirou</li>
</ul>
DuckCorp Infrastructure - Bug #737 (Resolved): IrcOnWeb is sometimes rejected by the servers [https://projects.duckcorp.org/issues/737, 2021-10-26T14:44:31Z, Marc Dequènes (duck@duckcorp.org)]
<p>TheLounge gives this error:<br /><pre>
Closing link: (00DAAABJB@193.200.42.177) [WEBIRC: you don't match any configured WebIRC hosts.] (irc)
</pre></p>
<p>The server logs confirm:<br /><pre>
Mon Oct 25 2021 03:52:01 REMOTECGIIRC: From irc2.duckcorp.org: Connecting user 01DAAAA7C (193.200.42.177) tried to use WEBIRC but didn't match any configured WebIRC hosts.
</pre></p>
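<p>For context, the rejection happens at the WEBIRC host-matching step: the server compares the gateway's IP and its reverse-resolved hostname against the configured host masks. A rough sketch of that logic (hypothetical and for illustration only; this is not the actual ircd code, and the <code>*.duckcorp.org</code> mask is made up):<br /><pre>
# Sketch of WEBIRC host-mask matching (hypothetical, for illustration only).
def webirc_host_allowed(client_ip, client_hostname, masks)
  masks.any? do |mask|
    # A mask may match either the raw IP or the reverse-resolved name.
    File.fnmatch(mask, client_ip) || File.fnmatch(mask, client_hostname)
  end
end

# If the reverse DNS lookup fails, the hostname stays the bare IP, so a
# name-based mask silently stops matching:
webirc_host_allowed("193.200.42.177", "193.200.42.177", ["*.duckcorp.org"])         # => false
webirc_host_allowed("193.200.42.177", "thelounge.duckcorp.org", ["*.duckcorp.org"]) # => true
</pre></p>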
<p>Not sure what's going on; maybe it is a DNS lookup problem, but the config did not change, the version is the same as on Buster since we used a backport, and on the host the DNS and the other services are working fine.</p>
DuckCorp Infrastructure - Enhancement #652 (In Progress): Orfeo would like a brand new body [https://projects.duckcorp.org/issues/652, 2019-05-08T16:47:00Z, Marc Dequènes (duck@duckcorp.org)]
<p>It is a followup of <a class="issue tracker-2 status-3 priority-4 priority-default closed parent" title="Enhancement: Toushirou would like a brand new body (Resolved)" href="https://projects.duckcorp.org/issues/537">#537</a> for Orfeo only.</p>
<p>Orfeo is old too and even if we do not need more power now it crashed last year for an undetermined reason and we should think of the future.</p>
<p>I'm still looking into the possibility of hosting it in an LXD container on Elwing. My Internet connection is better, even if not wonderful, and my complicated network config and the Hivane L2TP tunnel are stable now. As we might never be able to replace the machine at the current hosting, it is an even more interesting possibility to explore.</p>
DuckCorp Infrastructure - Bug #646 (Resolved): restrict LDAP service accounts [https://projects.duckcorp.org/issues/646, 2019-04-21T07:47:24Z, Marc Dequènes (duck@duckcorp.org)]
<ul>
<li>check if only necessary fields are readable</li>
<li>limit which IP can auth with these accounts</li>
</ul>
DuckCorp Infrastructure - Enhancement #593 (Resolved): PAM LDAP Rework [https://projects.duckcorp.org/issues/593, 2017-09-19T18:16:12Z, Marc Dequènes (duck@duckcorp.org)]
<p>Most hosts are using nslcd to handle LDAP cache and authentication/authorization filters. It proved to be a better system and I wanted to use it everywhere but Elwing and Orfeo had services in need of special authorization filters and still use nss-ldap+pam-ldap+unscd.</p>
<p>Example of the minbif PAM config with <code>pam_ldap_minbif.conf</code> containing specific LDAP filters:<br /><pre>
auth requisite pam_ldap.so config=/etc/pam_ldap_minbif.conf
account requisite pam_ldap.so config=/etc/pam_ldap_minbif.conf
session optional pam_ldap.so config=/etc/pam_ldap_minbif.conf
password requisite pam_ldap.so config=/etc/pam_ldap_minbif.conf use_authtok
</pre></p>
<p>With nslcd's <code>pam_authz_search</code> it is now possible to mix various variables and couple host+service names like this:<br /><pre>
pam_authz_search (&(objectClass=shellUser)(uid=$username)(|(allowedServices=$fqdn--$service)(allowedServices=$service)))
</pre></p>
<p>The goal is to improve the LDAP config to put these new values into <code>allowedServices</code> and switch to nslcd everywhere. Then we can clean up the whole config and distribute it via Ansible.</p>
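<p>For reference, the target nslcd setup could look like this (a sketch only; the URI and base below are placeholders, not our actual values):<br /><pre>
# /etc/nslcd.conf (excerpt, sketch; uri and base are placeholders)
uri ldap://ldap.example.org/
base dc=example,dc=org

# Refuse PAM authorization unless the user's allowedServices matches
# either the bare service name or the host--service couple.
pam_authz_search (&(objectClass=shellUser)(uid=$username)(|(allowedServices=$fqdn--$service)(allowedServices=$service)))
</pre></p>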
<p>Also, changes in the PAM common files introduced problems (see <a class="issue tracker-1 status-6 priority-6 priority-high2 closed" title="Bug: pam-auth-update activated LDAP in common non-ldap configurations (Rejected)" href="https://projects.duckcorp.org/issues/349">#349</a>), which may open unwanted accesses, so this rework would also fix those problems, as we would get back to <code>pam-auth-update</code> management, as intended by the Debian package maintainers.</p>
DuckCorp Infrastructure - Bug #487 (Resolved): Orfeo disk is dead [https://projects.duckcorp.org/issues/487, 2015-11-17T21:29:56Z, Marc Dequènes (duck@duckcorp.org)]
Original disk was:
<ul>
<li>Seagate ST973401LSUN72G</li>
<li>SAS</li>
<li>10k rpm</li>
<li>73 GB</li>
</ul>
DuckCorp Infrastructure - Bug #450 (Resolved): Orfeo Threatened [https://projects.duckcorp.org/issues/450, 2015-04-26T09:32:28Z, Marc Dequènes (duck@duckcorp.org)]
<p>Alionis' business is being sold to Jaguar (not yet announced officially), and VIP users will soon be dropped. Thus Orfeo may be switched off at any time. We need to prepare for such a catastrophe.</p>
DuckCorp Infrastructure - Bug #433 (Resolved): Disk failure on Toushirou [https://projects.duckcorp.org/issues/433, 2015-01-28T14:25:28Z, Marc Dequènes (duck@duckcorp.org)]
<p>Disk 9ND1E99X had problems, unfortunately we missed the notification (or it was not fired).</p>
<pre>
c0 [Sun Sep 14 2014 18:05:18] ERROR Drive timeout detected: port=0
c0 [Sun Sep 14 2014 18:05:38] ERROR Drive timeout detected: port=0
</pre>
DuckCorp Infrastructure - Bug #349 (Rejected): pam-auth-update activated LDAP in common non-ldap configurations [https://projects.duckcorp.org/issues/349, 2014-09-10T18:28:43Z, Marc Dequènes (duck@duckcorp.org)]
<p>We need to find a way to either prevent pam-auth-update from changing anything, or handle the non-LDAP config manually instead of the opposite.</p>
DuckCorp Infrastructure - Bug #305 (Resolved): Daneel's backup is down [https://projects.duckcorp.org/issues/305, 2012-11-24T13:57:24Z, Marc Dequènes (duck@duckcorp.org)]
<p>Disk 1TB Barracuda 7200.12 (S/N: 6VP2B313) is dead. The system is OK, but the backup data are lost (they were no longer in RAID 1, to gain room).</p>
DuckCorp Infrastructure - Bug #300 (Resolved): Certificates autocheck is broken with Ruby 1.9 [https://projects.duckcorp.org/issues/300, 2012-07-12T22:20:09Z, Marc Dequènes (duck@duckcorp.org)]
<pre>
/usr/lib/ruby/1.9.1/net/smtp.rb:948:in `check_response': 501 5.1.7 Bad sender address syntax (Net::SMTPSyntaxError)
from /usr/lib/ruby/1.9.1/net/smtp.rb:917:in `getok'
from /usr/lib/ruby/1.9.1/net/smtp.rb:832:in `mailfrom'
from /usr/lib/ruby/1.9.1/net/smtp.rb:659:in `send_message'
from /usr/local/sbin/check_certs_expiration:92:in `block in <main>'
from /usr/lib/ruby/1.9.1/net/smtp.rb:520:in `start'
from /usr/lib/ruby/1.9.1/net/smtp.rb:457:in `start'
from /usr/local/sbin/check_certs_expiration:91:in `<main>'
</pre>
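<p>One plausible cause (unverified): Net::SMTP sends <code>MAIL FROM:&lt;address&gt;</code> with the string it is given verbatim, so a sender string still carrying a display name or angle brackets would yield exactly this 501. A hedged sketch of the sanitizing the script could do before calling <code>send_message</code> (the <code>bare_address</code> helper and the certwatch address are made up for illustration):<br /><pre>
# Hypothetical helper: reduce "Name <user@host>" to the bare address
# before handing it to Net::SMTP#send_message.
def bare_address(from)
  # Prefer the part inside angle brackets; otherwise use the string as-is.
  from[/<([^>]+)>/, 1] || from.strip
end

bare_address("DuckCorp Certwatch <certwatch@example.org>")  # => "certwatch@example.org"
bare_address("certwatch@example.org")                       # => "certwatch@example.org"
</pre></p>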
<p>Because of this, the LDAP certificates on Orfeo and Elwing were expired for about two days, which is totally <strong>unacceptable</strong>. I also don't know why I could not find a mail from cron notifying of this failure.</p>
DuckCorp Infrastructure - Bug #168 (Resolved): Hang messages on Toushirou's network interface [https://projects.duckcorp.org/issues/168, 2010-10-28T22:57:46Z, Marc Dequènes (duck@duckcorp.org)]
<p>Several times in the syslog:<br /><pre>
Oct 24 23:24:02 Toushirou kernel: [10112933.944361] e1000e: eth-sivit NIC Link is Down
Oct 24 23:24:02 Toushirou kernel: [10112933.944826] e1000e: eth-hivane NIC Link is Down
Oct 24 23:24:06 Toushirou kernel: [10112937.429947] e1000e: eth-sivit NIC Link is Up 100 Mbps Full Duplex, Flow Control: None
Oct 24 23:24:06 Toushirou kernel: [10112937.429995] 0000:0e:00.0: eth-sivit: 10/100 speed: disabling TSO
Oct 24 23:24:06 Toushirou kernel: [10112937.665861] e1000e: eth-hivane NIC Link is Up 100 Mbps Full Duplex, Flow Control: None
Oct 24 23:24:06 Toushirou kernel: [10112937.665908] 0000:0d:00.0: eth-hivane: 10/100 speed: disabling TSO
Oct 24 23:24:13 Toushirou kernel: [10112944.660467] e1000e: eth-hivane NIC Link is Down
Oct 24 23:24:13 Toushirou kernel: [10112944.708361] e1000e: eth-sivit NIC Link is Down
Oct 24 23:24:20 Toushirou kernel: [10112951.425853] e1000e: eth-hivane NIC Link is Up 100 Mbps Full Duplex, Flow Control: None
Oct 24 23:24:20 Toushirou kernel: [10112951.425901] 0000:0d:00.0: eth-hivane: 10/100 speed: disabling TSO
Oct 24 23:24:21 Toushirou kernel: [10112952.530870] e1000e: eth-sivit NIC Link is Up 100 Mbps Full Duplex, Flow Control: None
Oct 24 23:24:21 Toushirou kernel: [10112952.530917] 0000:0e:00.0: eth-sivit: 10/100 speed: disabling TSO
Oct 24 23:51:41 Toushirou kernel: [10114592.280468] e1000e: eth-hivane NIC Link is Down
Oct 24 23:51:44 Toushirou kernel: [10114595.569852] e1000e: eth-hivane NIC Link is Up 100 Mbps Full Duplex, Flow Control: None
Oct 24 23:51:44 Toushirou kernel: [10114595.569900] 0000:0d:00.0: eth-hivane: 10/100 speed: disabling TSO
Oct 24 23:51:47 Toushirou kernel: [10114598.552951] e1000e: eth-sivit NIC Link is Down
Oct 24 23:51:49 Toushirou kernel: [10114600.948966] e1000e: eth-hivane NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
Oct 24 23:51:50 Toushirou kernel: [10114602.041831] e1000e: eth-sivit NIC Link is Up 100 Mbps Full Duplex, Flow Control: None
Oct 24 23:51:50 Toushirou kernel: [10114602.041878] 0000:0e:00.0: eth-sivit: 10/100 speed: disabling TSO
Oct 24 23:53:02 Toushirou kernel: [10114673.464968] e1000e: eth-sivit NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
Oct 25 11:23:47 Toushirou kernel: [10156118.816141] 0000:0d:00.0: eth-hivane: Detected Tx Unit Hang:
Oct 25 11:23:47 Toushirou kernel: [10156118.816143] TDH <a3>
Oct 25 11:23:47 Toushirou kernel: [10156118.816144] TDT <a9>
Oct 25 11:23:47 Toushirou kernel: [10156118.816146] next_to_use <a9>
Oct 25 11:23:47 Toushirou kernel: [10156118.816147] next_to_clean <a1>
Oct 25 11:23:47 Toushirou kernel: [10156118.816148] buffer_info[next_to_clean]:
Oct 25 11:23:47 Toushirou kernel: [10156118.816149] time_stamp <197555e76>
Oct 25 11:23:47 Toushirou kernel: [10156118.816150] next_to_watch <a3>
Oct 25 11:23:47 Toushirou kernel: [10156118.816152] jiffies <197555fd0>
Oct 25 11:23:47 Toushirou kernel: [10156118.816153] next_to_watch.status <0>
Oct 25 11:23:49 Toushirou kernel: [10156120.816159] 0000:0d:00.0: eth-hivane: Detected Tx Unit Hang:
Oct 25 11:23:49 Toushirou kernel: [10156120.816161] TDH <a3>
Oct 25 11:23:49 Toushirou kernel: [10156120.816163] TDT <a9>
Oct 25 11:23:49 Toushirou kernel: [10156120.816164] next_to_use <a9>
Oct 25 11:23:49 Toushirou kernel: [10156120.816165] next_to_clean <a1>
Oct 25 11:23:49 Toushirou kernel: [10156120.816166] buffer_info[next_to_clean]:
Oct 25 11:23:49 Toushirou kernel: [10156120.816167] time_stamp <197555e76>
Oct 25 11:23:49 Toushirou kernel: [10156120.816168] next_to_watch <a3>
Oct 25 11:23:49 Toushirou kernel: [10156120.816170] jiffies <1975561c4>
Oct 25 11:23:49 Toushirou kernel: [10156120.816171] next_to_watch.status <0>
Oct 25 11:23:51 Toushirou kernel: [10156122.816231] 0000:0d:00.0: eth-hivane: Detected Tx Unit Hang:
Oct 25 11:23:51 Toushirou kernel: [10156122.816233] TDH <a3>
Oct 25 11:23:51 Toushirou kernel: [10156122.816234] TDT <a9>
Oct 25 11:23:51 Toushirou kernel: [10156122.816236] next_to_use <a9>
Oct 25 11:23:51 Toushirou kernel: [10156122.816237] next_to_clean <a1>
Oct 25 11:23:51 Toushirou kernel: [10156122.816238] buffer_info[next_to_clean]:
Oct 25 11:23:51 Toushirou kernel: [10156122.816239] time_stamp <197555e76>
Oct 25 11:23:51 Toushirou kernel: [10156122.816240] next_to_watch <a3>
Oct 25 11:23:51 Toushirou kernel: [10156122.816241] jiffies <1975563b8>
Oct 25 11:23:51 Toushirou kernel: [10156122.816243] next_to_watch.status <0>
Oct 25 11:23:53 Toushirou kernel: [10156124.816146] 0000:0d:00.0: eth-hivane: Detected Tx Unit Hang:
Oct 25 11:23:53 Toushirou kernel: [10156124.816148] TDH <a3>
Oct 25 11:23:53 Toushirou kernel: [10156124.816150] TDT <a9>
Oct 25 11:23:53 Toushirou kernel: [10156124.816151] next_to_use <a9>
Oct 25 11:23:53 Toushirou kernel: [10156124.816152] next_to_clean <a1>
Oct 25 11:23:53 Toushirou kernel: [10156124.816153] buffer_info[next_to_clean]:
Oct 25 11:23:53 Toushirou kernel: [10156124.816154] time_stamp <197555e76>
Oct 25 11:23:53 Toushirou kernel: [10156124.816156] next_to_watch <a3>
Oct 25 11:23:53 Toushirou kernel: [10156124.816157] jiffies <1975565ac>
Oct 25 11:23:53 Toushirou kernel: [10156124.816158] next_to_watch.status <0>
</pre></p>
<p>and once:<br /><pre>
Oct 25 11:23:54 Toushirou kernel: [10156125.816029] ------------[ cut here ]------------
Oct 25 11:23:54 Toushirou kernel: [10156125.816063] WARNING: at /build/buildd-linux-2.6_2.6.32-15-amd64-PisqNL/linux-2.6-2.6.32/debian/build/source_amd64_none/net/sched/sch_generic.c:261 dev_watchdog+0xe2/0x194()
Oct 25 11:23:54 Toushirou kernel: [10156125.816137] Hardware name: PDSMi
Oct 25 11:23:54 Toushirou kernel: [10156125.816161] NETDEV WATCHDOG: eth-hivane (e1000e): transmit queue 0 timed out
Oct 25 11:23:54 Toushirou kernel: [10156125.816206] Modules linked in: tcp_diag inet_diag xfrm6_mode_ro xfrm_user ip6t_REJECT nf_conntrack_ipv6 ipt_REDIRECT xt_multiport xt_MARK ipt_REJECT ip6table_mangle ip6table_filter ip6_tables iptable_mangle iptable_nat xt_tcpudp xt_state iptable_filter ip_tables x_tables quota_v2 quota_tree dummy sit tunnel4 coretemp w83793 hwmon_vid nf_nat_ftp nf_nat nf_conntrack_ipv4 nf_defrag_ipv4 nf_conntrack_ftp nf_conntrack tun loop radeon ttm drm_kms_helper drm i3000_edac shpchp i2c_i801 i2c_algo_bit edac_core i2c_core tpm_tis rng_core pci_hotplug snd_pcsp snd_pcm snd_timer snd soundcore snd_page_alloc evdev container tpm tpm_bios button processor ext3 jbd mbcache dm_mod sd_mod crc_t10dif 3w_9xxx scsi_mod e1000e thermal thermal_sys [last unloaded: scsi_wait_scan]
Oct 25 11:23:54 Toushirou kernel: [10156125.816609] Pid: 0, comm: swapper Not tainted 2.6.32-5-amd64 #1
Oct 25 11:23:54 Toushirou kernel: [10156125.816637] Call Trace:
Oct 25 11:23:54 Toushirou kernel: [10156125.816658] <IRQ> [<ffffffff812609b6>] ? dev_watchdog+0xe2/0x194
Oct 25 11:23:54 Toushirou kernel: [10156125.816691] [<ffffffff812609b6>] ? dev_watchdog+0xe2/0x194
Oct 25 11:23:54 Toushirou kernel: [10156125.816720] [<ffffffff8104dc48>] ? warn_slowpath_common+0x77/0xa3
Oct 25 11:23:54 Toushirou kernel: [10156125.816749] [<ffffffff812608d4>] ? dev_watchdog+0x0/0x194
Oct 25 11:23:54 Toushirou kernel: [10156125.816777] [<ffffffff8104dcd0>] ? warn_slowpath_fmt+0x51/0x59
Oct 25 11:23:54 Toushirou kernel: [10156125.816807] [<ffffffff81041b78>] ? enqueue_task_fair+0x24/0x68
Oct 25 11:23:54 Toushirou kernel: [10156125.816836] [<ffffffff8103a507>] ? activate_task+0x20/0x26
Oct 25 11:23:54 Toushirou kernel: [10156125.816864] [<ffffffff8104a124>] ? try_to_wake_up+0x249/0x259
Oct 25 11:23:54 Toushirou kernel: [10156125.816892] [<ffffffff812608a8>] ? netif_tx_lock+0x3d/0x69
Oct 25 11:23:54 Toushirou kernel: [10156125.816921] [<ffffffff8124b76c>] ? netdev_drivername+0x3b/0x40
Oct 25 11:23:54 Toushirou kernel: [10156125.816949] [<ffffffff812609b6>] ? dev_watchdog+0xe2/0x194
Oct 25 11:23:54 Toushirou kernel: [10156125.816977] [<ffffffff8103fa28>] ? __wake_up+0x30/0x44
Oct 25 11:23:54 Toushirou kernel: [10156125.817005] [<ffffffff8105a25b>] ? run_timer_softirq+0x1c9/0x268
Oct 25 11:23:54 Toushirou kernel: [10156125.817035] [<ffffffff810539d6>] ? __do_softirq+0xdd/0x19f
Oct 25 11:23:54 Toushirou kernel: [10156125.817064] [<ffffffff81024d62>] ? lapic_next_event+0x18/0x1d
Oct 25 11:23:54 Toushirou kernel: [10156125.817093] [<ffffffff81011cac>] ? call_softirq+0x1c/0x30
Oct 25 11:23:54 Toushirou kernel: [10156125.817121] [<ffffffff81013903>] ? do_softirq+0x3f/0x7c
Oct 25 11:23:54 Toushirou kernel: [10156125.817148] [<ffffffff81053845>] ? irq_exit+0x36/0x76
Oct 25 11:23:54 Toushirou kernel: [10156125.817176] [<ffffffff81025827>] ? smp_apic_timer_interrupt+0x87/0x95
Oct 25 11:23:54 Toushirou kernel: [10156125.817206] [<ffffffff81011673>] ? apic_timer_interrupt+0x13/0x20
Oct 25 11:23:54 Toushirou kernel: [10156125.817233] <EOI> [<ffffffff81017dd8>] ? mwait_idle+0x72/0x7d
Oct 25 11:23:54 Toushirou kernel: [10156125.817266] [<ffffffff81017d88>] ? mwait_idle+0x22/0x7d
Oct 25 11:23:54 Toushirou kernel: [10156125.817294] [<ffffffff8100feb1>] ? cpu_idle+0xa2/0xda
Oct 25 11:23:54 Toushirou kernel: [10156125.817322] [<ffffffff814ec140>] ? early_idt_handler+0x0/0x71
Oct 25 11:23:54 Toushirou kernel: [10156125.817351] [<ffffffff814eccd1>] ? start_kernel+0x3dc/0x3e8
Oct 25 11:23:54 Toushirou kernel: [10156125.817379] [<ffffffff814ec3b7>] ? x86_64_start_kernel+0xf9/0x106
Oct 25 11:23:54 Toushirou kernel: [10156125.817408] ---[ end trace 99863d965c6ed34c ]---
Oct 25 11:23:57 Toushirou kernel: [10156128.640988] e1000e: eth-hivane NIC Link is Up 1000 Mbps Full Duplex, Flow Control: RX/TX
</pre></p>
DuckCorp Infrastructure - Bug #152 (Resolved): secondary IRCd should be replaced [https://projects.duckcorp.org/issues/152, 2010-09-15T08:42:41Z, Marc Dequènes (duck@duckcorp.org)]
<p>T1R is stopping all its services. We only have a short delay to find a solution.</p>
DuckCorp Infrastructure - Bug #104 (Rejected): High io wait on Elwing [https://projects.duckcorp.org/issues/104, 2010-06-16T22:40:34Z, Marc Dequènes (duck@duckcorp.org)]
<p>We are experiencing slow responses sometimes on Elwing, and it seems to affect NFS a lot.</p>
<p>I found high io wait lasting more than a few seconds, and sometimes several minutes long.</p>
<p>I looked at a bonnie++ check:<br /><pre>
Version 1.96 ------Sequential Output------ --Sequential Input- --Random-
Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
Elwing 8G 347 95 25089 7 16206 3 397 22 58156 5 82.4 2
Latency 112ms 11751ms 4522ms 2791ms 9326ms 17265ms
Version 1.96 ------Sequential Create------ --------Random Create--------
Elwing -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 5138 10 +++++ +++ 6805 11 12978 25 +++++ +++ 7334 12
Latency 24172us 560us 651us 1317us 105us 252us
1.96,1.96,Elwing,1,1276727105,8G,,347,95,25089,7,16206,3,397,22,58156,5,82.4,2,16,,,,,5138,10,+++++,+++,6805,11,12978,25,+++++,+++,7334,12,112ms,11751ms,4522ms,2791ms,9326ms,17265ms,24172us,560us,651us,1317us,105us,252us
</pre></p>
<p>And for comparison, here is the same on Annael:<br /><pre>
Version 1.96 ------Sequential Output------ --Sequential Input- --Random-
Concurrency 1 -Per Chr- --Block-- -Rewrite- -Per Chr- --Block-- --Seeks--
Machine Size K/sec %CP K/sec %CP K/sec %CP K/sec %CP K/sec %CP /sec %CP
Annael 8G 682 97 73985 13 42083 5 3548 85 19836 1 271.3 4
Latency 12457us 2543ms 349ms 67604us 347s 433ms
Version 1.96 ------Sequential Create------ --------Random Create--------
Annael -Create-- --Read--- -Delete-- -Create-- --Read--- -Delete--
files /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP /sec %CP
16 15253 18 +++++ +++ +++++ +++ +++++ +++ +++++ +++ +++++ +++
Latency 11974us 623us 490us 966us 194us 113us
1.96,1.96,Annael,1,1276738616,8G,,682,97,73985,13,42083,5,3548,85,19836,1,271.3,4,16,,,,,15253,18,+++++,+++,+++++,+++,+++++,+++,+++++,+++,+++++,+++,12457us,2543ms,349ms,67604us,347s,433ms,11974us,623us,490us,966us,194us,113us
</pre></p>
<p>HD temperatures are about 43-45°C, which is a bit high, but is the same as on Daneel, which is working fine.</p>
<p>I need to investigate more.</p>
DuckCorp Infrastructure - Bug #68 (Resolved): eJabberd is BROKEN [https://projects.duckcorp.org/issues/68, 2010-04-28T17:38:01Z, Marc Dequènes (duck@duckcorp.org)]
<p>One of the pubsub tables was not migrating properly, so I dropped it and restarted, as seen in a post on the official forum. The migration then went OK, and the service was working again.</p>
So, I decided it was the right time to switch to a better LDAP check, as is now possible in this eJabberd version:
<ul>
<li>use full JID mapping, to allow hosting multiple domains later</li>
<li>use the <code>allowedServices</code> field for authorization</li>
</ul>
<p>After the restart, the server does not start properly, and reverting the configuration did not help.</p>
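<p>For reference, the intended eJabberd LDAP settings would look roughly like this in <code>ejabberd.cfg</code> (a sketch from memory; the server, base, and <code>allowedServices</code> value are placeholders, and the exact option spelling should be checked against the eJabberd documentation):<br /><pre>
%% ejabberd.cfg (excerpt, sketch; values are placeholders)
{auth_method, ldap}.
{ldap_servers, ["ldap.example.org"]}.
{ldap_base, "dc=example,dc=org"}.
%% map the login to the mail attribute (full JID mapping)
{ldap_uids, [{"mail", "%u"}]}.
%% only accept accounts allowed to use the jabber service
{ldap_filter, "(allowedServices=jabber)"}.
</pre></p>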
<p>A short analysis showed epmd starts properly, without any error in the log, running sasl and mnesia without problem. The ejabberd process starts but hangs in a <em>select</em> call, and does not even open its log to say something helpful.</p>