readeef is a feed aggregator, similar to the late Google Reader.
The major difference is that it is self-hosted: your readeef instance will remain operational for as long as you let it. This is what makes it your feed reader.
It also has the features expected of a feed aggregator, such as multi-user support and feed organization with tags.

These are just some of the major features of readeef:

  • Multiple user support
  • Organizing feeds with tags, as well as showing only unread articles from feeds and tags
  • Displaying popular articles among the user's feeds
    • Various social sites are used to calculate the articles' popularity scores
    • The list of sites may be configured
    • Scoring may be turned off entirely
  • Managing favorite articles
  • Searching for articles
    • The search is constrained to the current feed or tag
  • Server notification of feed updates
  • Theme selection
  • I18N support
  • Support for various services with which to share articles
  • Admin user management
    • Creating new users
    • Removing existing users
  • Feed management
    • Searching for new feeds using keywords or urls
    • Adding and removing feed tags
  • Pubsubhubbub support
  • Article content extraction support
    • Supports different extractors, such as readability
    • Defaults to the configuration-free GoOse extractor
  • Article thumbnailing support
    • Supports different thumbnailers
    • May be configured to use an extractor to pull the top image from an article
    • Defaults to using an image that is present in the feed article's description
  • Configuration of different search providers
    • Defaults to the configuration-free Bleve search indexer
    • For extra features, elasticsearch may be configured
  • Support for various databases
    • Defaults to SQLite, since it doesn't require any configuration. May cause locking problems!
    • Setting up PostgreSQL is preferred
  • Optional emulation of popular aggregator APIs
    • Feed-a-fever API support. Relatively complete. If enabled, the entry URL will be /api/v2/fever/ (note the trailing /). Tested using: Press
    • Tiny Tiny RSS API support. Support for article labeling, notes, publishing and archiving is missing. If enabled, the entry URL will be /api/v12/tt-rss. Tested using: Tiny Tiny RSS
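As a concrete example of the Fever emulation: Fever-compatible clients authenticate by POSTing an api_key that, per the original Fever API, is the MD5 hex digest of "email:password". A small sketch of how a client could derive that key (which credential fields map to "email" and "password" in a given readeef setup is an assumption here):

```go
package main

import (
	"crypto/md5"
	"fmt"
)

// feverAPIKey derives the api_key a Fever-compatible client sends:
// the MD5 hex digest of "email:password", as defined by the Fever API.
func feverAPIKey(email, password string) string {
	sum := md5.Sum([]byte(email + ":" + password))
	return fmt.Sprintf("%x", sum)
}

func main() {
	key := feverAPIKey("admin", "admin")
	// A client would POST this as form data (api_key=<key>)
	// to the emulated endpoint, e.g. /api/v2/fever/
	fmt.Println(key)
}
```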

Configuration

readeef's configuration is written using the same syntax as git's config. It is separated into sections, marked with [ ]. The default configuration values are listed in godoc, under the variable `DefaultCfg`. Low-level server configuration can also be set in the same file; the server-specific options are likewise documented in godoc, under the variable `DefaultCfg`. A typical configuration file might contain the following:

[logger]
    level = error
    file = error.log
    formatter = text
    access-file = access.log
[db]
    driver = postgres
    connect = host=/var/run/postgresql user=readeef dbname=readeef sslmode=disable
[timeout]
    connect = 2s
    read-write = 10s
[feed-manager]
    update-interval = 5m            # At what interval to update the feeds
[api]
    emulators = fever
[content]
    search-provider = elastic       # Sets up elasticsearch
    extractor = readability         # Sets up readability as the content extractor
    article-processors = proxy-http # Adds the proxy-http processor to the list of processors
                                    # Requires that the session middleware is used
                                    # Useful when the web interface is on an https host

    readability-key = MY_READABILITY_KEY        # your readability API key
    elastic-url = http://example.com:19200      
    proxy-http-url-template = "https://example.com/proxy?url={{ . }}" # The {{ . }} denotes where the actual url is supposed to go
[static]
    expires = 24h                   # Sets max-age to this duration, in seconds,
                                    # and the Expires header to the current time plus this duration
        
If you want to disable the session middleware, since you don't want to use the proxy-http article processor, you may want to override the list of default middleware:
[dispatcher]
	middleware              # clear any previous values
	middleware = Static
	middleware = Gzip
	middleware = Url        # The url mw has to be before the i18n
	middleware = I18N       # The i18n mw has to be before the session
	middleware = Logger
	middleware = Context
	middleware = Error      # Should always be the last one wrapping middleware
        
To set up Pubsubhubbub support, you'd have to add the following section to the configuration:
[hubbub]
    callback-url = https://example.com  # The base url of your server
        
readeef supports 3 types of feed processors, each type being invoked at a different time in the feed's 'lifecycle'. The first type are the parser processors. They are invoked on the feed, as soon as content is fetched. The content need not be new, and there is no way to tell so early on. They do quick operations, such as cleaning up the article descriptions by removing script tags and the like, or marking an image in the article description as being the top image of the article. They are configured as follows:
[feed-parser]
	processors                          # The empty entry cleans up any previous values in the array
	processors = cleanup
	processors = top-image-marker
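To get a feel for what the 'cleanup' stage does, here is a toy version of a script-stripping processor. The function name and shape are hypothetical; readeef's actual processor API may look quite different:

```go
package main

import (
	"fmt"
	"regexp"
)

// scriptTag matches <script ...>...</script> blocks, case-insensitively.
// A real cleanup processor would use a proper HTML parser; a regexp is
// enough for this illustration.
var scriptTag = regexp.MustCompile(`(?is)<script\b.*?</script>`)

// cleanupDescription sketches a parser processor: it strips script tags
// from an article description as soon as the feed content is fetched.
func cleanupDescription(desc string) string {
	return scriptTag.ReplaceAllString(desc, "")
}

func main() {
	dirty := `<p>Hello</p><script>alert("evil")</script><img src="a.png">`
	fmt.Println(cleanupDescription(dirty)) // <p>Hello</p><img src="a.png">
}
```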
        
Next are the feed monitors. They are called when a feed gets updated, and when it gets deleted. The feeds now know if there are any new articles, and more expensive operations can be done only once for them. Using such monitors, thumbnails may be generated for new articles, and they might be indexed by the current search provider. Their configuration is in the 'feed-manager' category:
[feed-manager]
	monitors
	monitors = index
	monitors = thumbnailer
        
In the above example, the 'index' feed monitor sends new articles to be indexed by whatever search provider is set up. The thumbnailer deals with generating thumbnails. The final category of processors is called whenever articles are fetched from the database. Very similar to the parser processors, they too are supposed to do quick operations on the resulting articles. And some are indeed almost exactly the same as their parser processor counterparts:
[content]
	article-processors
	article-processors = insert-thumbnail-target
	article-processors = proxy-http
	# article-processors = relative-url

	proxy-http-url-template = "/proxy?url={{ . }}"
        
In the above example, the last two processors have counterparts in the parser stage. 'proxy-http' converts 'src' attributes using the template defined by 'proxy-http-url-template'. This action is more suitable in this stage, as the changes are not saved in the database, and may be omitted when emulated requests do not send back a session cookie. 'relative-url' merely replaces any 'src' attribute links with their protocol-relative equivalents.

Quick start

readeef is written in Go, and as of September 2014 requires at least version 1.3 of the language. The currently supported databases are PostgreSQL and SQLite. SQLite support is only built if CGO is enabled. The latter is not recommended, as locking problems will occur.
Three binaries may be built from the sources. The first binary is the standalone server. Unless readeef is being added to an existing Go server setup, this is the one to build. Since readeef uses bleve for FTS capabilities, bleve-specific build tags (e.g.: leveldb, cld2, etc.) should be passed here.

go build github.com/urandom/readeef/cmd/readeef-server
Unless you are using SQLite, readeef will need to be configured as well. A minimal configuration file might be something like this:
[db]
    driver = postgres
    connect = host=/var/run/postgresql user=postgresuser dbname=readeefdbname
            
You may provide the standalone server with a config file. The default server configuration is documented on godoc.org, under the variable DefaultCfg. The server will need to be started in the same directory that contains the 'static' and 'templates' directories, typically the checkout itself.
./readeef-server -config $CONFIG_FILE
If the server has been built with the 'nofs' tag, the client-side libraries will need to be fetched. This is best done with bower. Make sure the _.bowerrc_ file, provided with the sources, is in the same directory that contains the 'static' directory. In there, just run the following:
bower update
The second is a user administration script, which can be used to add, remove and modify users. It is not necessary to have this binary, as readeef will create an 'admin' user with password 'admin', if such a user doesn't already exist:
go build github.com/urandom/readeef/cmd/readeef-user-admin
You can now use the script to add, remove and edit users:
# Adding a user
./readeef-user-admin -config $CONFIG_FILE add $USER_LOGIN $USER_PASS

# Turning a user into an admin
./readeef-user-admin -config $CONFIG_FILE set $USER_LOGIN admin true
            
The third is a search index management script, which can be used to re-index all articles in the database. It is usually not necessary to have this binary, as articles are indexed when they are added to the database. It might be useful if you switch from one search provider to another:
go build github.com/urandom/readeef/cmd/readeef-search-index

./readeef-search-index -config $CONFIG_FILE
            

Extra details

There are some things I wanted to experiment with when writing readeef. The most visible one is the use of Polymer for the web UI. The framework itself is highly experimental; it only works on modern browsers, and it seems to favor Chrome quite a bit. So expect rough seas if you are using anything else. It's quite pleasant to use, however, so it kind of balances itself out in the end. That, and web components in general hold the promise of a very bright future for web development.

The other thing is the authorization. I opted out of traditional cookie-based authorization, deciding to experiment a little. Instead, some user data is stored in local storage. Before each API request, there is also a preliminary nonce request. This nonce, along with the Login and MD5API values from the user data, is used to construct the authorization header. That header is very similar to the AWS authorization header, but is constructed on the client side, using CryptoJS. The MD5API value is itself constructed the same way, using the username and password, during the login phase. This whole approach might raise a few eyebrows, since client-side cryptography is generally considered harmful. However, this is not high-risk banking software, and there isn't a lot, if any, personal information that might be had by breaking in. That being said, let this be a warning.
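To make the scheme more concrete, here is a rough sketch of how such an AWS-style header could be assembled from a nonce, the login, and an MD5-derived API key. The exact key derivation, string-to-sign, and header format readeef uses are not reproduced here; every name and concatenation below is illustrative:

```go
package main

import (
	"crypto/hmac"
	"crypto/md5"
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// apiKey mimics the MD5API idea: a key derived once, at login time,
// from the username and password. The exact derivation readeef uses
// is an assumption here.
func apiKey(login, password string) string {
	sum := md5.Sum([]byte(login + ":" + password))
	return hex.EncodeToString(sum[:])
}

// sign builds an AWS-style signature over the per-request nonce and
// request details, keyed with the derived API key. Illustrative only.
func sign(key, login, nonce, method, path string) string {
	mac := hmac.New(sha256.New, []byte(key))
	fmt.Fprintf(mac, "%s\n%s\n%s\n%s", login, nonce, method, path)
	return hex.EncodeToString(mac.Sum(nil))
}

func main() {
	key := apiKey("alice", "s3cret")
	sig := sign(key, "alice", "nonce-123", "GET", "/api/v2/feed")
	// The client would then send something like:
	// Authorization: READEEF alice:<sig>
	fmt.Println(sig)
}
```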
Note: if there is a cookie for the server that goes by the name 'session', that comes from the session middleware, used by the proxy-http processor code.