readeef is a news feed aggregator, similar to the discontinued Google Reader.
The big difference is that it is hosted by you. Your copy of readeef will keep working for as long as you let it. That is what makes it your news aggregator.
It also has some of the features expected of an aggregator, such as multi-user support and organizing feeds with tags.

These are some of readeef's main features:

  • Multiple user support
  • Feed organization with tags, as well as showing only the unread articles of a feed or tag
  • Showing popular articles from the user's feeds
    • Various social sites are used to calculate an article's popularity score
    • The list of sites can be configured
    • Scoring can be turned off altogether
  • Managing favorite articles
  • Article search
    • The search is limited to the current feed or tag
  • Server notifications for updated feeds
  • A choice of color themes
  • Internationalization support
  • Support for various services with which articles can be shared
  • User management through an administrator role
    • Creating new users
    • Removing existing users
  • Feed management
    • Searching for new feeds via keywords or addresses
    • Adding and removing tags
  • Pubsubhubbub support
  • Article content extraction
    • Support for various services, such as readability
    • GoOse is used by default and requires no configuration
  • Article thumbnail extraction
    • Various services are supported
    • Can be configured to use content extraction to find an article's main image
    • By default, an image from the article description is used
  • Configuration of different search providers
    • Bleve search is used by default and requires no configuration
    • For more features, elasticsearch can be configured
  • Support for different databases
    • SQLite is used by default, as it requires no configuration. It may lead to table locking problems
    • A PostgreSQL setup is preferable
  • Optionally, API emulators for popular news aggregators can be enabled (see the sketch right after this list)
    • Feed-a-fever API support. Fairly complete. If enabled, the entry point will be /api/v2/fever/ (the trailing / is important). Tested with: Press
    • Tiny Tiny RSS API support. Labels, notes, publishing and archiving of articles are not supported. If enabled, the entry point will be /api/v12/tt-rss. Tested with: Tiny Tiny RSS
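
Enabling both emulators might look like the following (a sketch based on the sample configuration further below; the 'fever' emulator name appears there, while the Tiny Tiny RSS emulator name is assumed to be 'tt-rss'):

[api]
    emulators = fever       # Feed-a-fever API emulator
    emulators = tt-rss      # Tiny Tiny RSS API emulator (name assumed)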

Configuration

readeef's configuration is written using the same syntax as git's config. It is separated into sections, marked using [ ]. The default configuration values are present in godoc, under the variable `DefaultCfg`. Low-level server configuration can also be set in the same file; the server-specific options are shown in godoc, under the variable `DefaultCfg`. A typical configuration file might contain the following:

[logger]
    level = error
    file = error.log
    formatter = text
    access-file = access.log
[db]
    driver = postgres
    connect = host=/var/run/postgresql user=readeef dbname=readeef sslmode=disable
[timeout]
    connect = 2s
    read-write = 10s
[feed-manager]
    update-interval = 5m            # At what interval to update the feeds
[api]
    emulators = fever
[content]
    search-provider = elastic       # Sets up elasticsearch
    extractor = readability         # Sets up readability as the content extractor
    article-processors = proxy-http # Adds the proxy-http processor to the list of processors
                                    # Requires that the session middleware is used
                                    # Useful when the web interface is on an https host

    readability-key = MY_READABILITY_KEY        # your readability API key
    elastic-url = http://example.com:19200      
    proxy-http-url-template = "https://example.com/proxy?url={{ . }}" # The {{ . }} denotes where the actual url is supposed to go
[static]
    expires = 24h                   # Sets the max-age to this value in seconds
                                    # and expires header to the current time plus this duration
        
If you don't want to use the proxy-http article processor and would therefore like to disable the session middleware, you may want to override the list of default middleware:
[dispatcher]
	middleware              # clear any previous values
	middleware = Static
	middleware = Gzip
	middleware = Url        # The uri mw has to be before the i18n
	middleware = I18N       # The i18n mw has to be before the session
	middleware = Logger
	middleware = Context
	middleware = Error      # Should always be the last one wrapping middleware
        
To set up Pubsubhubbub support, you'll have to add the following section to the configuration:
[hubbub]
    callback-url = https://example.com  # The base url of your server
        
readeef supports 3 types of feed processors, each type being invoked at a different time in the feed's 'lifecycle'. The first type are the parser processors. They are invoked on the feed, as soon as content is fetched. The content need not be new, and there is no way to tell so early on. They do quick operations, such as cleaning up the article descriptions by removing script tags and the like, or marking an image in the article description as being the top image of the article. They are configured as follows:
[feed-parser]
	processors                          # The empty entry cleans up any previous values in the array
	processors = cleanup
	processors = top-image-marker
        
Next are the feed monitors. They are called when a feed gets updated, and when it gets deleted. The feeds now know if there are any new articles, and more expensive operations can be done only once for them. Using such monitors, thumbnails may be generated for new articles, and they might be indexed by the current search provider. Their configuration is in the 'feed-manager' category:
[feed-manager]
	monitors
	monitors = index
	monitors = thumbnailer
        
In the above example, the 'index' feed monitor sends new articles to be indexed by whatever search provider is set up. The thumbnailer deals with generating thumbnails. The final category of processors is called whenever articles are fetched from the database. Very similar to the parser processors, they too are supposed to do quick operations on the resulting articles. And some are indeed almost exactly the same as their parser processor counterparts:
[content]
	article-processors
	article-processors = insert-thumbnail-target
	article-processors = proxy-http
	# article-processors = relative-url

	proxy-http-url-template = "/proxy?url={{ . }}"
        
In the above example, the last two processors have a counterpart in the parser stage. 'proxy-http' converts 'src' attributes using the template defined by 'proxy-http-url-template'. This action is more suitable in this stage, as the changes are not saved in the database, and may be omitted when emulated requests do not send back a session cookie. 'relative-url' merely replaces any 'src' attribute links with their protocol-relative equivalents.
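
For illustration, with the 'proxy-http-url-template' value above, an image source such as the following (a made-up URL; whether the substituted value ends up percent-encoded depends on the template and processor, so treat this as a sketch):

    <img src="http://feeds.example.org/image.png">

would be rewritten to point at the proxy:

    <img src="/proxy?url=http://feeds.example.org/image.png">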

Quick setup

readeef is written in Go, and as of September 2014, requires at least version 1.3 of the language. The currently supported databases are PostgreSQL and SQLite. SQLite support is only built if CGO is enabled. The latter is not recommended, as locking problems will occur.
Three binaries may be built from the sources. The first is the standalone server. Unless readeef is being added to an existing golang server setup, this is the one you'll want to build. Since readeef uses bleve for FTS capabilities, bleve-specific tags (e.g.: leveldb, cld2, etc.) should be passed here.

go build github.com/urandom/readeef/cmd/readeef-server
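bleve's optional backends are selected via build tags, so they can be passed to the same command. For example, to include the leveldb and cld2 backends (a sketch; it assumes the corresponding native dependencies are installed):

go build -tags "leveldb cld2" github.com/urandom/readeef/cmd/readeef-server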
Unless you are using SQLite, readeef will need to be configured as well. A minimal configuration file might be something like this:
[db]
    driver = postgres
    connect = host=/var/run/postgresql user=postgresuser dbname=readeefdbname
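
If your PostgreSQL server listens on a TCP port rather than a unix socket, the connect string might instead use the usual lib/pq keywords (a sketch; adjust the values to your own setup):

[db]
    driver = postgres
    connect = host=localhost port=5432 user=postgresuser dbname=readeefdbname sslmode=disable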
            
You may provide the standalone server with a config file. The default server configuration is documented in godoc.org under the variable `DefaultCfg`. The server will need to be started in the same directory that contains the 'static' and 'templates' directories, typically the checkout itself.
./readeef-server -config $CONFIG_FILE
If the server has been built with the 'nofs' tag, the client-side libraries will need to be fetched. This is best done with bower. Make sure the _.bowerrc_ file, provided with the sources, is in the same directory that contains the 'static' directory. In there, just run the following:
bower update
The second is a user administration script, which can be used to add, remove and modify users. It is not necessary to have this binary, as readeef will create an 'admin' user with password 'admin', if such a user doesn't already exist:
go build github.com/urandom/readeef/cmd/readeef-user-admin
You can now use the script to add, remove and edit users:
# Adding a user
./readeef-user-admin -config $CONFIG_FILE add $USER_LOGIN $USER_PASS

# Turning a user into an admin
./readeef-user-admin -config $CONFIG_FILE set $USER_LOGIN admin true
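
Removing a user should follow the same pattern (the subcommand name here is an assumption based on the description above, not taken from the sources):

# Removing a user
./readeef-user-admin -config $CONFIG_FILE remove $USER_LOGIN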
            
The third is a search index management script, which can be used to re-index all articles in the database. It is usually not necessary to have this binary, as articles are indexed when they are added to the database. It might be useful if you switch from one search provider to another:
go build github.com/urandom/readeef/cmd/readeef-search-index

./readeef-search-index -config $CONFIG_FILE
            

Extra details

There are some things I wanted to experiment with when writing readeef. The most visible one is the use of Polymer for the web UI. The framework itself is highly experimental, it only works on modern browsers, and it seems to favor Chrome quite a bit, so expect rough seas if you are using anything else. It's quite pleasant to use, however, so it kind of balances itself out in the end. That, and web components in general hold the promise of a very bright future for web development.


The other thing is the authorization. I opted out of the traditional cookie-based authorization, deciding to experiment a little bit. Instead, some user data is stored in local storage. Before each API request, there is also a preliminary nonce request. This nonce, along with the Login and MD5API values from the user data, is used to construct the authorization header. That header is very similar to the AWS authorization header, but is constructed on the client side, using CryptoJS. The MD5API value is also constructed in the same way, using the username and password during the login phase. This whole approach might raise a few eyebrows, since client-side cryptography is considered harmful. However, this is not high-risk banking software, and there isn't a lot, if any, personal information that might be obtained by breaking in. That being said, let this be a warning.
Note: if there is a cookie for the server that goes by the name 'session', it comes from the session middleware, used by the proxy-http processor code.