Difference between pages "INN" and "Sucknews"

From Wikislax
{{RightTOC}}

== What is INN? ==

[http://www.isc.org/software/inn/ INN] (InterNet News) is the leading Usenet news software, available from the [http://www.isc.org ISC] website.

The news articles received from peer servers on the Internet can be viewed by clients using a newsreader such as slrn or Thunderbird. It is possible both to read articles and to answer them.

== Installing INN ==

[http://www.isc.org/software/inn Download] and untar in /usr/local. Installation is described very well in the [http://www.eyrie.org/~eagle/software/inn/docs-2.5/ INSTALL] file, also available from the ISC website. INN runs as the news user. This user is present by default on Slackware, but the home directory must be changed to match INN's: '''/usr/local/news'''.

 # usermod --home /usr/local/news news
 # tar -C /usr/local -xvf inn-x.y.z.tar.gz
 # cd /usr/local
 # chown -R root:root inn-x.y.z
 # cd inn-x.y.z
 # less INSTALL
 # ./configure --help | less
 # ./configure --prefix=/usr/local/news --libdir=/usr/local/news/lib64 --mandir=/usr/local/man \
 --with-sendmail --with-perl --with-python --with-berkeleydb --with-zlib --with-openssl --with-sasl
 # make
 # make install
 # make clean

== Configuring INN ==

INN runs as the news user, so log in or su as news in order not to break the file permissions. There are <u>27 configuration files!</u> But it is possible to work with only a small subset of them, minimally modified. In addition, the default configuration files provided with the software are very well written and commented, and man pages are available.

<u>inn.conf</u>: the main configuration file, specifying the host, paths, and certificates. As INN runs as '''news''', let us make a copy of the server private key that will be readable only by this user:

 # cd /etc/ssl/private
 # cp mtakey.pem.unsecure news.mtakey.pem.unsecure
 # chown <nowiki>news:news</nowiki> news.mtakey.pem.unsecure
 # cd /usr/local/news/etc
 # vi inn.conf

The '''organization''' line must be modified, replacing '''A poorly-installed InterNetNews site''' with your organization name. '''ovmethod''' is the method used to store overview data; '''ovdb''' looks more efficient than the others, so we have chosen it. '''artcutoff''' is the retention duration of articles, in days. It is not possible to feed your site with articles older than this value, so it can be interesting to increase it, as by default it is only 10 days. '''pathhost''' must contain the site '''FQDN''', which must be resolvable (for instance present in the '''/etc/hosts''' file).

 mta:                    "/usr/sbin/sendmail -oi -oem %s"
 organization:           "studioware"
 pathhost:               inner.studioware.com
 pathnews:               /usr/local/news
 artcutoff:              366
 tlscafile:              /etc/ssl/certs/cacert.pem
 tlscapath:              /etc/ssl/certs
 tlscertfile:            /etc/ssl/certs/mtacert.pem
 tlskeyfile:             /etc/ssl/private/'''news.'''mtakey.pem.unsecure

<u>cycbuff.conf</u>: configuration of cyclic buffers. Cyclic buffers are a more efficient article storage mode, using a reduced number of files or block devices.

 cycbuff:ONE:/var/news/cycbuffs/one:512000
 cycbuff:TWO:/var/news/cycbuffs/two:512000
 metacycbuff:ONETWO:ONE,TWO

Create the files using:

 # mkdir -p /var/news/cycbuffs
 # chown -R <nowiki>news:news</nowiki> /var/news
 # chmod -R 750 /var/news
 # usermod -s /usr/bin/bash news
 # usermod -d /usr/local/news news
 # su news
 $ dd if=/dev/zero of=/var/news/cycbuffs/one bs=1K count=512000
 $ dd if=/dev/zero of=/var/news/cycbuffs/two bs=1K count=512000
 $ chmod 640 /var/news/cycbuffs/*
 <ctrl>d
 #

<u>expire.ctl</u>: expiration of articles. '''remember''' indicates how long message headers are kept after the body has been removed; this avoids fetching articles again if they are offered again. The other options do not apply when using cyclic buffers: in that case expiration is on a first-in, first-out basis.

 /remember/:366

<u>incoming.conf</u>: this file defines the sites with which you have agreements and that feed you fresh news. As there are probably none, you do not need to modify it. How are you going to feed your site, then? With an external feeding program, '''sucknews''', which presents itself to your Internet Service Provider as a simple news reader. '''sucknews''' is described in detail further on.

<u>newsfeeds</u>: lists the newsfeeds that you are going to manage. A file with the name specified will be created in '''/usr/local/news/spool/outgoing''' and will contain one line per article to post. In the following example, all groups except '''control''' and '''junk''' will be posted.

 free\
     :*,!junk,!control*\
     :Tf,Wnm:

<u>readers.conf</u>: list of access authorizations. For general access, except to the control groups:

 auth "theworld" {
     hosts: *
     default: "<theworld>"
 }
 
 access "theworld" {
     users: "<theworld>"
     newsgroups: "*,!control*,!junk"
     access: RPA
 }

<u>storage.conf</u>: general options for article storage. In the example, cnfs corresponds to the cyclic buffers:

 method cnfs {
     newsgroups: *
     class: 2
     options: ONETWO
 }

The next step is to initialize the history database:

 # su news
 $ cd /usr/local/news
 $ bin/makedbz -i -s 100000 -o
 <ctrl>d

'''INN''' sends maintenance mails to the '''news''' user, so we need to create the person in OpenLDAP and the mailbox in Cyrus-IMAP:

 # cd /usr/local/etc/openldap
 # vi news.ldif
 i
 dn: cn=news,dc='''domain''',dc=com
 objectclass: person
 cn: news
 sn: news
 userPassword: myPassword
 :x
 # ldapadd -x -D "cn=Manager,dc=studioware,dc=com" -W -f news.ldif
 # cyradm --user postmaster --auth plain localhost
 Password:
 localhost> cm user.news
 localhost> cm user.news.Drafts
 localhost> cm user.news.Junk
 localhost> cm user.news.Sent
 localhost> cm user.news.Trash
 localhost> sq user.news 307200
 quota:307200
 localhost> quit

'''INN''' executes the daily script '''news.daily''', which writes a report that is then mailed to the '''news''' user. Execution of '''news.daily''' must be configured in the news '''crontab'''. Here is an example for a daily execution at 13:30:

 # su news
 $ crontab -e
 # MIN HOUR DAY MONTH DAYOFWEEK COMMAND
 30 13 * * * /usr/local/news/bin/news.daily expireover lowmark

Add the '''usenet''' alias to '''/etc/mail/aliases''' and run '''newaliases''':

 # redirect news
 usenet:        news
 
 # newaliases
 /etc/mail/aliases: 16 aliases, longest 10 bytes, 172 bytes total

INN should now be ready to work.

== Running INN ==

To start INN automatically at system startup, add these lines to the /etc/rc.d/rc.local file:

 # start inn
 if [ -x /usr/local/news/bin/rc.news ]; then
         echo "Starting inn: sudo -u news /usr/local/news/bin/rc.news start"
         sudo -u news /usr/local/news/bin/rc.news start
 fi

To stop INN automatically at system shutdown, add these lines to the /etc/rc.d/rc.local_shutdown file:

 # stop inn
 if [ -x /usr/local/news/bin/rc.news ]; then
         echo "Stopping inn: sudo -u news /usr/local/news/bin/rc.news stop"
         sudo -u news /usr/local/news/bin/rc.news stop
 fi

== Getting Articles ==

The list of newsgroups to relay can be defined by editing the '''db/active''' file manually (innd must be stopped) or by using '''ctlinnd'''. The definitions take no wildcards, meaning that the newsgroups must be entered one by one. The ISC maintain a [ftp://ftp.isc.org/pub/usenet/CONFIG list].

 # sudo -u news /usr/local/news/bin/rc.news start
 Starting innd.
 Scheduled start of /usr/local/news/bin/innwatch.
 # /usr/local/news/bin/ctlinnd newgroup alt.os.linux.slackware y jpmenicucci@studioware.com
 Ok

But as no news peer has been defined in our configuration, INN will not get articles for these newsgroups, so we will have to get them from our Internet Service Provider in another way, using the [[Sucknews]] software as an alternative. That is the object of the next page.

<br/>

{{pFoot|[[RoundCube]]|[[Main Page]]|[[Sucknews]]}}

Sucknews (revision as of 23:35, 6 December 2017)

What is Sucknews?

Sucknews lets you fetch newsfeeds over a regular NNTP connection to your Internet Service Provider. This comes in handy when you are not a big company and have no peering agreements with other newsgroup servers.

Installing Sucknews

There was a time when sucknews was available from sucknews.org, but that site has disappeared and there seems to be no obvious source of up-to-date sucknews releases, so we'll stick with an old version (perfectly satisfactory anyway). Untar and install as below:

# tar -C /usr/local -xvf suck-x.y.z.tar.gz
# cd /usr/local
# chown -R root:root suck-x.y.z
# cd suck-x.y.z
# ./configure --help | less
# ./configure --prefix=/usr/local/news --libdir=/usr/local/news/lib64 --mandir=/usr/local/man \
--with-inn-lib=/usr/local/news/lib --with-inn-include=/usr/local/news/include --with-perl-exe=/usr/bin
# make
# make install
# make clean
# cd /usr/local/news/bin
# chown news:news lmove rpost suck testhost

Running Sucknews

The script below posts the local messages and fetches incoming messages from your Internet Service Provider:

# su news
$ cd /usr/local/news/bin
$ vi suck.sh
i
#!/bin/sh

NNTP_SERVER=news.free.fr
NEWS_PATH=/usr/local/news
BIN_PATH=/usr/local/news/bin
SUCK_PATH=/usr/local/news/bin
BATCH_PATH=/usr/local/news/spool/outgoing/free
FILTER_PATH=$SUCK_PATH/filter.sh

cd $SUCK_PATH

######################################################
# posting outgoing articles (localhost->NNTP_SERVER) #
######################################################

echo "Sending articles..."
if test -s $BATCH_PATH
then
$BIN_PATH/rpost $NNTP_SERVER -b $BATCH_PATH \
    -f $FILTER_PATH \$\$o=/tmp/filtered \$\$i /tmp/filtered
else
    echo "No articles to post..."
fi

######################################################
# getting incoming articles (NNTP_SERVER->localhost) #
######################################################

echo "Getting articles..."
if [ -e /tmp/newposts ]; then
    rm /tmp/newposts
fi
$BIN_PATH/suck $NNTP_SERVER -AL $NEWS_PATH/db/active -i 0 -n -H -K -br /tmp/newposts -c
if [ -e /tmp/newposts ]; then
    $BIN_PATH/rnews /tmp/newposts
fi
cat /dev/null > $BATCH_PATH
<esc>
:x
$ chmod u+x suck.sh
$ ./suck.sh
Sending articles...
No articles to post...
Getting articles...
Attempting to connect to news.free.fr
Using Port 119
Official host name: news.free.fr
Address: 212.27.60.38
Address: 212.27.60.39
Address: 212.27.60.37
Address: 212.27.60.40
Connected to news.free.fr
200 news-4.free.fr (4-2) NNRP Service Ready - newsmaster@proxad.net (posting ok)
No sucknewsrc to read, creating
Adding new groups from local active file to sucknewsrc
New Group - adding to sucknewsrc: control
control - 1 articles 3983-3983
New Group - adding to sucknewsrc: control.cancel
control.cancel - 349 articles 94271852-94272200
New Group - adding to sucknewsrc: control.checkgroups
control.checkgroups - 4 articles 5228-5231
New Group - adding to sucknewsrc: control.newgroup
control.newgroup - 1 articles 73186-73186
New Group - adding to sucknewsrc: control.rmgroup
control.rmgroup - 1 articles 30996-30996
New Group - adding to sucknewsrc: junk
junk - 1 articles 38322-38322
New Group - adding to sucknewsrc: alt.os.linux.slackware
alt.os.linux.slackware - 1907 articles 231211-233117
Elapsed Time = 0 mins 0.72 seconds
2227 Articles to download
Deduping Elapsed Time = 0 mins 0.00 seconds
Deduped, 2227 items remaining, 0 dupes removed.
Total articles to download: 2227
5290946 Bytes received in 1 mins 20.79 secs, BPS = 65489.2
Closed connection to news.free.fr
Building RNews Batch File(s)
Cleaning up after myself
news@inner:/usr/local/news/bin$
<ctrl>d
#

In the first part, messages are posted to the provider using rpost. The list of articles is obtained from the information contained in newsfeed file /usr/local/news/spool/outgoing/free. The -f option affords applying a filter to the messages so as to expurge certain headers :

#!/bin/sh
# filter.sh: called by rpost with the article token ($1) and the
# output file ($2); strips provider-specific headers before posting.
/usr/local/news/bin/sm -R "$1" | sed -e "/^X-Trace/d" -e "/^NNTP-Posting-Host/d" \
    -e "/^Xref/d" -e "/^X-Complaints-To/d" -e "/^NNTP-Posting-Date/d" > "$2"
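To see what the filter does, the same sed expressions can be run over a made-up article (the headers below are invented for illustration, not taken from a real feed):

```shell
# Pipe a fabricated article through the same sed expressions as filter.sh
# and print what survives; the provider-added headers are deleted.
printf '%s\n' \
    'From: user@example.org' \
    'Newsgroups: alt.os.linux.slackware' \
    'X-Trace: provider-trace-data' \
    'NNTP-Posting-Host: 192.0.2.1' \
    'Xref: news.example.org alt.os.linux.slackware:1' \
    'Subject: a test article' |
sed -e "/^X-Trace/d" -e "/^NNTP-Posting-Host/d" \
    -e "/^Xref/d" -e "/^X-Complaints-To/d" -e "/^NNTP-Posting-Date/d"
```

Only the From:, Newsgroups: and Subject: lines remain; in the real script the input comes from sm -R and the result is written to the file that rpost then posts.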

In the second part, suck fetches the messages from your Internet Service Provider. The -AL option takes the list of groups from the active file given. -i 0 sets no limit on the number of messages to fetch, -n selects the mode where articles are identified by the numbers the provider assigns them, -c updates those numbers once the operation completes, -br names the output batch file, and -H and -K skip the history and killfile processing. The file written is then fed to rnews, and the articles become available.
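For reference, suck keeps its per-group state in a sucknewsrc file in the directory it runs from (this is why the sample run above reports "No sucknewsrc to read, creating"). As far as I recall from suck's documentation, each line holds a group name and the highest article number already fetched, so after the run above it would contain lines resembling:

```
control 3983
control.cancel 94272200
alt.os.linux.slackware 233117
```

With -c these numbers are advanced after each run, so only newer articles are fetched next time.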

Automating Sucknews

Automate Sucknews execution using crontab. In the example below, suck.sh runs daily at 13:00:

# su news
$ crontab -e
# MIN HOUR DAY MONTH DAYOFWEEK COMMAND
00 13 * * * /usr/local/news/bin/suck.sh
30 13 * * * /usr/local/news/bin/news.daily expireover lowmark
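By default cron mails the script's output to the news user. If you prefer a log file instead, a variant of the crontab line could redirect it (the log path here is my choice, not part of the original setup):

```
00 13 * * * /usr/local/news/bin/suck.sh >> /usr/local/news/suck.log 2>&1
```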

