@dangerousbeans %+iZJZCrZf+HEMR9MhxK6/lITvFeizxd0Kaz9SsdMw2c=.sha256

Can I plumb a FlumeQuery into a FlumeReduce?

Right now my FlumeReduce takes an age to rescan the whole log, but really I want it to build its index from just the data that a FlumeQuery can already filter down to.

What I have:

// FlumeQuery
var FlumeQuery = require('flumeview-query')

// index messages by the x and y in their content, then by sequence
var query = sbot._flumeUse(
  'geoDB',
  FlumeQuery(3, {
    indexes: [
      {key: 'pos', value: [['value', 'content', 'x'], ['value', 'content', 'y'], ['value', 'sequence']]}
    ]
  })
)

// FlumeReduce
var flumeView = require('flumeview-reduce')

var reduce = ssbServer._flumeUse('actualFriends', flumeView(
  1.3,            // version (bumping it triggers a full rebuild)
  reduceData,     // reduce function
  mapToData,      // map function
  null,           // codec
  initialState()  // initial state
))

So can I get some kinda flume-thing from the first one and plug it into the second one so I can filter out 99.9% of the posts?

@Christian Bundy %1o/NxNqhZ+2mql07VBCKtWXGYp5F5tssviG9wVUjomo=.sha256

@dangerousbeans

Not yet, but I just started a thread about it here: %Sv2DRcG...

@Dominic %HYtjtcEEL1FsqH/n1eGVud1vqNgqqBd53IBE9Hxd/Gg=.sha256

@dangerousbeans what sort of reduce do you want to do? Can you express it as a map-filter-reduce query (as supported by flumeview-query)?
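
A query along those lines might look roughly like the sketch below, run against the 'geoDB' view defined above, assuming the $filter / $map / $reduce stages of the map-filter-reduce module that flumeview-query builds on; the $is type check, the $count reduce, and the output field names are assumptions, not anything confirmed in this thread.

var pull = require('pull-stream')

pull(
  query.read({
    query: [
      // keep only messages whose content carries numeric x and y
      {$filter: {value: {content: {x: {$is: 'number'}, y: {$is: 'number'}}}}},
      // reshape each message to just the fields the reduce needs
      {$map: {
        author: ['value', 'author'],
        x: ['value', 'content', 'x'],
        y: ['value', 'content', 'y']
      }},
      // aggregate, e.g. count positions per author
      {$reduce: {
        author: ['author'],
        positions: {$count: true}
      }}
    ]
  }),
  pull.collect(function (err, results) {
    if (err) throw err
    console.log(results)
  })
)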

@dangerousbeans %K+Y2j1MtF6kkrNTwmkXtA4HWt92pzeiVHjP6ymaR1qM=.sha256

@dominic well, when I was developing this it would scan through everything before it eventually produced any console output. What seemed like it would be great is using the existing filter of "just messages that have any kind of x: int, y: int set" and letting it build this second index from that. That way the index would build in very little time.
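
For what it's worth, that filter can also be expressed directly in the view's map function; this is only a sketch, and it assumes flumeview-reduce skips values for which map returns undefined (the first build still streams the whole log once, it just keeps everything else out of the index).

// sketch: only messages with numeric x and y in their content reach reduceData
function mapToData (msg) {
  var c = msg.value && msg.value.content
  if (!c || typeof c.x !== 'number' || typeof c.y !== 'number') return // skipped
  return {author: msg.value.author, x: c.x, y: c.y}
}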
